AMD FidelityFX Super Resolution

I don't think there is much debate over whether DLSS looks better than FSR. Not really up for contention, imho: DLSS 2.x does look better.

What they are really trying to draw attention to is that their drivers already have a spatial scaler built in. And while it is similar in that it uses the Lanczos method, it is missing part of what makes FSR better: FSR resolves edges better and fixes the ringing issues of regular Lanczos resizing. And when FSR is built in by the developers, they can choose not to have everything scaled by FSR, allowing other elements to remain properly sharp at native resolution.
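The anti-ringing part can be illustrated with a toy 1D Lanczos resampler (a hypothetical sketch, not AMD's code): clamping each output sample to the min/max of its source taps suppresses the over/undershoot that plain Lanczos produces at hard edges.

```python
import numpy as np

def lanczos_kernel(x, a=3):
    # Lanczos windowed sinc: sinc(x) * sinc(x/a) for |x| < a, else 0
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_resize_1d(signal, new_len, a=3, clamp_ringing=True):
    """Resize a 1D signal with Lanczos; optionally clamp overshoot
    (ringing) to the local min/max of the contributing source samples."""
    n = len(signal)
    out = np.empty(new_len)
    for i in range(new_len):
        # Map output sample center back to source coordinates
        src = (i + 0.5) * n / new_len - 0.5
        lo = int(np.floor(src)) - a + 1
        idx = np.arange(lo, lo + 2 * a)
        taps = np.clip(idx, 0, n - 1)          # replicate edges
        w = lanczos_kernel(src - idx, a)
        w /= w.sum()                            # normalize partial kernel
        val = np.dot(w, signal[taps])
        if clamp_ringing:
            # Anti-ringing: never over/undershoot the nearby samples
            val = np.clip(val, signal[taps].min(), signal[taps].max())
        out[i] = val
    return out
```

Upscaling a step edge with `clamp_ringing=False` overshoots past the edge values (the ringing the post mentions); with the clamp on, the output stays within range while the negative lobes still keep the edge crisp.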

I also hope this gives AMD a nudge toward implementing an FSR-like scaler driver-side.
AMD recommends good TAA as well, meaning there is a temporal component to how the game frame is resolved. Yes, no AI for that frame, but in most cases TAA with motion vectors is being done. In addition, any developer can build on the code for their game and improve it, since it is open source.

It does look like DLSS 2.3 resolved the last issues I've had with DLSS. I wonder if one can just drop the 2.3 DLL into a game using a 2.x version, and whether it will work and give all the benefits?

Now, AMD could add an AI component to FSR as an optional AI stage, at the same stage as (or as a replacement for) TAA, before the FSR pass, maintaining compatibility for most cards with or without AI capability. FSR is also available to developers in both Unity and Unreal Engine.
 
Yes, except they kinda did before they released it
Four smaller consecutive frames => one bigger frame
FSR doesn't work like that at all. FSR takes a single image as input and upscales it, and AMD does not advertise it the way they did back when it was "in development".

My guess is that they had something better planned but realized their plan was too ambitious and they would not be able to make it both fast and universal, so they made it fast and universal by using something simpler. And apparently it works...


"subtle difference" my ass

If AMD actually made it use temporal data it could be an actually good upscaling solution that could rival DLSS even without using any AI stuff.
There is actually a possible reason for that. AMD had multiple teams working on FSR, and they were not all taking the same approach.

https://www.techradar.com/news/we-w...ing-the-game-with-fidelityfx-super-resolution

Also, a funny tidbit from that article: the team whose solution was chosen, and is now FSR 1, was Timothy Lottes' team. The name might not be familiar, but it should be. He's pretty much the guy credited with creating FXAA for NVIDIA, along with TXAA/TAA, and he helped create AMD's FidelityFX CAS/Radeon Image Sharpening, etc.
 
There is actually a possible reason for that. AMD had multiple teams working on FSR, and they were not all taking the same approach.

https://www.techradar.com/news/we-w...ing-the-game-with-fidelityfx-super-resolution

Also, a funny tidbit from that article: the team whose solution was chosen, and is now FSR 1, was Timothy Lottes' team. The name might not be familiar, but it should be. He's pretty much the guy credited with creating FXAA for NVIDIA, along with TXAA/TAA, and he helped create AMD's FidelityFX CAS/Radeon Image Sharpening, etc.
Timothy Lottes' presentation at SIGGRAPH 2021 dealing with FSR. Great read. He works for Unity now, previously AMD.

https://advances.realtimerendering.com/s2021/Unity AMD FSR - SIGGRAPH 2021.pdf
 
I did some testing in Cyberpunk. DLSS plays in a different league, so I shouldn't even attempt to show pics; they would just show the complete annihilation of FSR.
What I found rather strange is that while I seemed to like how FSR looks at first, the more I used it the weirder it all started looking to me.
[screenshot]

This is not how this game is supposed to look, with these weird blob-like shapes. It is even worse with RCAS.

[screenshot]

This is how the game is supposed to look. Nice and classy.

This was at 1080p, and obviously at a 2x scaling ratio FSR starts to look bad.
At 1440p it was all better, but then when I compared normal GPU upscaling to FSR, the latter didn't seem all that great an improvement either. A bilinear upscale looks blurred, hence bad, but the GPU upscaler does not look blurry; it actually looks sharper and has a less blobby appearance. Either way the difference is pretty small.
The biggest difference is performance. Running everything at 1440p is considerably better: VRR works and input lag is lower.

So yeah, Magpie is a great tool to waste performance on a slightly more candy-like appearance for your games.
For games which actually have FSR built in and where sharpening can be tweaked, it might be an option, but otherwise, even when playing below native resolution, the best solution is GPU scaling: integer scaling at 1/2, 1/3, etc. ratios and normal GPU scaling at other ratios.
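The integer-vs-GPU-scaling choice described above can be sketched as a tiny helper (hypothetical, just to illustrate the decision rule):

```python
def scaling_mode(native, render):
    """Pick a scaler for a render resolution: integer-ratio scaling when
    the native resolution is an exact whole-number multiple in both axes,
    else plain GPU (filtered) scaling. Resolutions are (width, height)."""
    sx, rem_x = divmod(native[0], render[0])
    sy, rem_y = divmod(native[1], render[1])
    if rem_x == 0 and rem_y == 0 and sx == sy:
        return f"integer x{sx}"
    return "gpu"
```

For example, 1920x1080 on a 4K display is an exact 1/2 ratio, so integer scaling applies; 1920x1080 on a 1440p display is not, so normal GPU scaling is the fallback.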


There is actually a possible reason for that. AMD had multiple teams working on FSR, and they were not all taking the same approach.

https://www.techradar.com/news/we-w...ing-the-game-with-fidelityfx-super-resolution

Also, a funny tidbit from that article: the team whose solution was chosen, and is now FSR 1, was Timothy Lottes' team. The name might not be familiar, but it should be. He's pretty much the guy credited with creating FXAA for NVIDIA, along with TXAA/TAA, and he helped create AMD's FidelityFX CAS/Radeon Image Sharpening, etc.
AMD will bring its temporal solution eventually, but it probably won't be as universal, if it is universal at all. They do not develop stuff and then not use it.
When they announce FSR2 with a slightly less ambitious plan for universality, I will know they have finally committed to a temporal solution :)
 
You mean a temporal solution that can work across hardware in a universally hardware-agnostic fashion like FSR does? Epic has already done it; see UE5's Temporal Super Resolution and the temporal upsampler that already exists in UE4 (funny, Timothy Lottes worked on Unreal Engine 4 too. Dude is awesome lol). Plus Intel's XeSS is also hardware agnostic.

It would be awesome if the next version of FSR was temporal and open source again.
 
You mean a temporal solution that can work across hardware in a universally hardware-agnostic fashion like FSR does? Epic has already done it; see UE5's Temporal Super Resolution and the temporal upsampler that already exists in UE4 (funny, Timothy Lottes worked on Unreal Engine 4 too. Dude is awesome lol). Plus Intel's XeSS is also hardware agnostic.
The future of gaming at high resolution looks bright indeed :)
AMD should join the club as soon as possible, though not before they have a solution that is actually good.

It would be awesome if the next version of FSR was temporal and open source again.
I am almost certain FSR 2.0 will be temporal.
It is the only way forward that makes any sense. TAA also needs to be replaced by something more modern, and it just makes perfect sense to construct a higher-resolution image from gathered temporal data.

Spatial upscalers are near the limits of what is feasible. Perhaps such algorithms could be greatly improved by throwing much more performance at them, but as a way to improve performance rather than tank it, they are at the limit. Besides, no matter how good your spatial upscaler is, it won't guess what the source image was. Nvidia already tried that with the first DLSS, and while it kinda worked, it was more a novelty than a really usable solution.
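The "construct a higher-resolution image from gathered temporal data" idea boils down to accumulating (reprojected) frames over time with a history clamp. A toy sketch of one accumulation step (my own illustration of the general TAA-style technique, not any vendor's algorithm; reprojection with motion vectors is assumed to have already happened):

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.1, clamp=True):
    """One step of a TAA-style accumulator: blend the (already
    reprojected) history buffer with the current frame. Clamping
    history to the 3x3 neighbourhood min/max of the current frame
    rejects stale samples (ghosting)."""
    if clamp:
        padded = np.pad(current, 1, mode='edge')
        windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
        lo = windows.min(axis=(2, 3))
        hi = windows.max(axis=(2, 3))
        history = np.clip(history, lo, hi)
    # Exponential blend: low alpha = long effective history, more detail
    return (1 - alpha) * history + alpha * current
```

With jittered sample positions per frame, the long effective history is what lets a temporal upscaler recover sub-pixel detail a spatial upscaler can only guess at.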
 
Inspired by the idea behind FSR, I set out to develop my own Lanczos-based upscaler.
[screenshot]


FSR seems to use some sort of additional filter. I only detect where to use Lanczos and where to use something else.
[screenshot]


And here is the Lanczos I used as a base.
[screenshot]


Lanczos, NIS and even FSR all have ringing artifacts. My XIS (Xor Image Scaling) does not make such compromises 😎

Debug view (showing what is being drawn using Lanczos)
[screenshot]


This view is cool by itself...
[screenshot]


Gotta put up a video of it in use. It looks like rainbows on oil, and it all moves as I fly around the base.

I am still optimizing this upscaler, but I will put it on GitHub. It is less "sharp" but does not make everything look cartoonish either. Great for games and videos.
Works with Magpie so supports all games 🤩

Took me less than one day. I did not use the FSR source code at all and instead went with my own understanding of how this can be done. I am pretty certain most of the effort on AMD's part was tweaking sliders and optimization (at which point they butchered quality), not figuring out how to do it. Of course, the person who first came up with the idea had to be brilliant.
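The "detect where to use Lanczos and where to use something else" approach described above can be sketched roughly like this (a hypothetical illustration based on the post's description, not XoR_'s actual code):

```python
import numpy as np

def edge_mask(img, threshold=0.1):
    """Per-pixel edge mask from gradient magnitude (central differences)."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(float)

def blend_upscalers(sharp, soft, mask):
    """Use the sharp (e.g. Lanczos) result on edges and the softer
    result elsewhere; mask is 1 on edges, 0 in flat regions."""
    return mask * sharp + (1 - mask) * soft
```

The debug view in the post would correspond to visualizing `mask`: the scaler only pays the Lanczos cost (and risks its ringing) where edges actually are.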

And now I am gonna play some more Rainbow Overload 🤪
 
Kinda looking forward to seeing how it turns out, XoR. Makes me a little jelly, as I gave up programming years ago :D
 
https://videocardz.com/newz/amd-releases-fidelityfx-super-resolution-plugin-for-unreal-engine-4

It is possible to make the AMD FSR UE4 plugin work with TAAU (Temporal Anti-Aliasing with Upsampling) through a Hybrid Upscaling mode. FSR might then act as a secondary upscaler; otherwise, the TAAU upscaling method will take priority.

UE4's TAAU in conjunction with FSR... that's a curious thought. Not sure how good that would really look; TAAU isn't exactly awesome. But it is the precursor to UE5's Temporal Super Resolution, which AMD worked on with Epic as well.
 
I have an older (2017) Lenovo laptop with the AMD 7920 A12 Radeon 7 APU. I took Terminator: Resistance, ran it on driver 21.2.3 (the last for this platform) at 1366x768 native on low settings, and was getting about 18 fps. FSR is baked into the game, so I could use it, and the Balanced setting pushed me up to around 40 fps with ReLive recording. The game is very playable now.
 
I have an older (2017) Lenovo laptop with the AMD 7920 A12 Radeon 7 APU. I took Terminator: Resistance, ran it on driver 21.2.3 (the last for this platform) at 1366x768 native on low settings, and was getting about 18 fps. FSR is baked into the game, so I could use it, and the Balanced setting pushed me up to around 40 fps with ReLive recording. The game is very playable now.

I also tried this game this weekend on my R9 Fury. At 4K with all Epic settings I was averaging 34-36 FPS (no AA). With FSR on Ultra it jumped to ~51 FPS, and on High it was ~64 FPS. The visual quality change was barely noticeable. A massive improvement, to say the least.
 
I know this is not what we are talking about, other than showing the laptop working in Sniper Elite III. My fps got better later on; the driver overlay was showing 47 fps after maybe some background stuff finished. It only has 8 GB of memory shared between CPU and GPU, but I will get a video of FSR working in Terminator: Resistance and also in Lossless Scaling.

 
Bummer. Guess they locked it down since I posted that originally back in December. https://hardforum.com/threads/sapph...aling-for-all-radeon-6800-6900-cards.2005756/

Radeon Super Resolution tech (for RDNA1 & 2 cards) works with games that support exclusive full-screen mode

"Didn't want to include this into the article, simply because there aren't many details yet, but RSR might look like Sapphire's Trixx Boost. It should work the same way, except the scaling will be handled by FSR algorithm." ~Videocardz

AMD to introduce Radeon Super Resolution (RSR) technology that works in "all" games


https://videocardz.com/newz/amd-to-...lution-rsr-technology-that-works-in-all-games

AMD Radeon Super Resolution will be released in January 2022. It will be supported on the RDNA1 and RDNA2 architectures (RX 5000 and newer). (The technology will only work with games that support exclusive full-screen mode.)
 
Just tested it for a bit, and it allows the sharpening pass without scaling as well. BTW, the update addressed a lot of minor issues in the game. Frame times are better by a substantial amount, and lots of other minor things were cleaned up or corrected. Hopefully we'll get an expansion pack or two.
 
Guru3D also has a quick writeup; they are pretty impressed, especially considering how little they thought of the first implementation.
 
Glad to see AMD making big progress on this. Now just get RT performance up to Nvidia’s and there is a good chance I’ll be sporting an AMD GPU next generation.
RT performance is what held me back from getting a 6900 XT instead of a 3090.
 
RT performance is what held me back from getting a 6900 XT instead of a 3090.
RT performance didn't matter to me at all, which is why I got a 6900 XT. It was also way cheaper at the time than any 3090 was going for. I have yet to even try FSR on my card, as I just haven't seen a need for it so far.
 
RT performance didn't matter to me at all, which is why I got a 6900 XT. It was also way cheaper at the time than any 3090 was going for. I have yet to even try FSR on my card, as I just haven't seen a need for it so far.

You know, I feel RT performance is important to me, however I don't actually play ANY games that use RT.

Currently playing Elden ring, and while it runs decently, those RT cores I seem to care so much about are doing nothing.

So I'm guessing that if someone replaced my 3080 with a 6800XT, I wouldn't REALLY notice.
 
You know, I feel RT performance is important to me, however I don't actually play ANY games that use RT.

Currently playing Elden ring, and while it runs decently, those RT cores I seem to care so much about are doing nothing.

So I'm guessing that if someone replaced my 3080 with a 6800XT, I wouldn't REALLY notice.
On PC, I only own two games with DLSS (Death Stranding and Nioh 2), and neither of them has ray tracing.
 
You know, I feel RT performance is important to me, however I don't actually play ANY games that use RT.
That's the marketing at work. Same with how suddenly everyone is concerned with being able to do machine learning on $300-400 GPUs, and how literally everyone is now a wannabe streamer and NEEDS NVENC.
 
That's the marketing at work. Same with how suddenly everyone is concerned with being able to do machine learning on $300-400 GPUs, and how literally everyone is now a wannabe streamer and NEEDS NVENC.
To be honest, a LOT of people stream and NVEnc is actually quite good....

ML AI cores are useless to most, though.
 
To be honest, a LOT of people stream and NVEnc is actually quite good....
Yeah, but anyone semi-serious about it should be thinking of a better way to do it. For most it seems to be a side hobby. Idk, hardly anything I'd pay extra for, I guess.
 
Yeah, but anyone semi-serious about it should be thinking of a better way to do it. For most it seems to be a side hobby. Idk, hardly anything I'd pay extra for, I guess.
NVENC is the best-supported method, mostly because the enterprise side supports it for a different use case: VDI desktop streaming is real-time H.264, and that's where NVENC came from. So it's absurdly well studied and known.
 
To be honest, a LOT of people stream and NVEnc is actually quite good....

ML AI cores are useless to most, though.
There are more folks tinkering with CUDA than you would expect. It’s a minority for sure, but there are more and more folks on the [H] side playing with it
 
There are more folks tinkering with CUDA than you would expect. It’s a minority for sure, but there are more and more folks on the [H] side playing with it
Good for them. It's still not something the average buyer of a card like a 3060 would care about beyond it being a marketing point. Yet you run into "but CUDA!" often enough that you'd swear everybody and their grandma is trying to do ML.

It's just marketing.
 
Good for them. It's still not something the average buyer of a card like a 3060 would care about beyond it being a marketing point. Yet you run into "but CUDA!" often enough that you'd swear everybody and their grandma is trying to do ML.

It's just marketing.
Yup. CUDA wasn't really for gamers, PhysX, or streamers; it was for bigger things outside of gaming. It just helped Nvidia unify the architecture. The same is true for Tensor cores: they are mostly useless for gamers but great for bigger AI-related things, and again they let Nvidia unify the architecture.

*shrug* At least the gaming side got something out of it all, even if it's minor.
 