AMD 7900 GPU series reviews are up.

I ordered one from Amazon on the 13th. Still hasn't shipped. I'm getting desperate, haha.
AMD has a daily release of the 7900 XTX on their website at 7 AM PST / 10 AM EST. If you're quick, you might be able to snag one. OEM model only, of course.

I recommend you go to the details page for the 7900 XTX and start refreshing from there at the start of the hour. Reason being, I noticed the main page may say "Out of Stock" while, at the same moment, the 7900 XTX page will say "Add to Cart". I was able to get one added to my cart and purchased through PayPal this morning. It took a couple of tries before it actually went into my cart instead of saying unavailable. You'll have about a minute before everything is gone. That should be plenty of time if you're ready and PayPal is preloaded on the PC or phone you're using, so it goes right through without verification.
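If you'd rather not mash F5 by hand, here's the same idea as a rough Python sketch; the product URL and the exact button text below are guesses on my part, so check the real page source and adjust before relying on it:

```python
# Rough sketch: poll the 7900 XTX details page and shout when "Add to Cart"
# shows up. The URL and the button/stock strings are assumptions; verify them
# against the actual page before trusting this.
import time
import requests

PRODUCT_URL = "https://shop.amd.com/en-us/7900xtx"  # hypothetical placeholder URL
CHECK_EVERY_SECONDS = 5

def looks_purchasable(html: str) -> bool:
    # Check the details page only; the main page can still say "Out of Stock"
    # at the same moment the details page flips to "Add to Cart".
    return "Add to Cart" in html and "Out of Stock" not in html

while True:
    try:
        page = requests.get(PRODUCT_URL, timeout=10)
        if looks_purchasable(page.text):
            print("Add to Cart is live -- go, you have about a minute.")
            break
        print("Still out of stock, retrying...")
    except requests.RequestException as exc:
        print(f"Request failed ({exc}), retrying...")
    time.sleep(CHECK_EVERY_SECONDS)
```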
 
AMD has a daily release of the 7900 XTX on their website at 7 AM PST / 10 AM EST. If you're quick, you might be able to snag one. OEM model only, of course.

I recommend you go to the details page for the 7900 XTX and start refreshing from there at the start of the hour. Reason being, I noticed the main page may say "Out of Stock" while, at the same moment, the 7900 XTX page will say "Add to Cart". I was able to get one added to my cart and purchased through PayPal this morning. It took a couple of tries before it actually went into my cart instead of saying unavailable. You'll have about a minute before everything is gone. That should be plenty of time if you're ready and PayPal is preloaded on the PC or phone you're using, so it goes right through without verification.
Yeah, I've actually missed it the past two days. I'll try again tomorrow.
 
Whelp. Best Buy finally had my XFX reference 7900 XTX ready for pickup today and did not cancel it due to the $899 pricing error. I popped it in when I got home a bit ago and ran a couple of quick 3DMark tests to mess around with some overclocking. With just a quick drop of voltage to 1070 mV, 110% memory, and a +15% power limit, while setting the max frequency to 3000 MHz, I saw a 10.4% increase in my Time Spy Extreme graphics score (15235) and 9.2% in Port Royal (16235) compared to stock settings. I'll mess with it more later, but that is pretty encouraging. I will say the card looks tiny compared to the XFX Merc 319 6900 XT that was in my case before. The fans are not as loud at 100% as I thought they would be (not that I would want to run them there), and 60-70% is relatively quiet and where I will likely run them for gaming. For reference: with the OC and a 70% fan, GPU temperature (in a closed case) reached a peak of 60C in Port Royal, with 81C at the hot spot. I'd rate the stock cooler as decent, but I'd certainly prefer a better cooler like the ones on the Nitro or TUF models.


The highest clock I saw in those quick runs was 2811 MHz (the average was of course lower, at 2629 MHz in Time Spy Extreme).
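For anyone wondering what that implies for the stock scores, here's the quick back-of-the-envelope math (the stock numbers below are inferred from the percentages, not separately measured):

```python
# Back-of-the-envelope: infer the stock graphics scores from the reported
# overclocked scores and percentage gains. These are inferred, not measured.
oc_results = {
    "Time Spy Extreme (graphics)": (15235, 10.4),  # (OC score, % gain over stock)
    "Port Royal": (16235, 9.2),
}

for test, (oc_score, gain_pct) in oc_results.items():
    stock_score = oc_score / (1 + gain_pct / 100)
    print(f"{test}: OC {oc_score} -> implied stock ~{stock_score:.0f}")
# Time Spy Extreme: ~13800 stock; Port Royal: ~14867 stock.
```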
 
Nvidia always wants to have the best card, even if only by a little, for a good reason: it's the PR that pays them.

Everyone talks about the 4090; everyone knows it's the fastest. Even Joe Gamer who watches YouTube influencers knows that the 4090 is THE king. And what's great is that this leaderboard-topping card, which gets Nvidia's name into the mouths of anyone 'in the know', ALSO costs money to own.

So when AMD brought out the 6950 XT that was SLIGHTLY faster than the vanilla 3090, Nvidia clocked the balls off of a new SKU and released the 3090 Ti.

Nvidia knows just how important the fastest-card crown is.
Did you actually watch? Your reply suggests "no".
Your reply is fair, but it touches on nothing.
 


I like this guy's channel as he is quite hardcore (Ernie & Bert haircut aside :D). He has a super anti-AMD bias, yet his numbers show that the 7900 series has legs but does lose on power draw. Note that all cards in the review are MAXED OUT, but only the 7900 XTX is labeled OC and non-OC, so don't get confused and think the others are non-OC cards. My biggest surprise was how much power an OC'd 3090 Ti sucked back. That thing is a fucking pig and I don't know why it was worshiped at all. Fermi 480, anyone? But Nvidia got that way under control for the 4000 series. You can tell when Nvidia is actually threatened by the market and when they are not by how hard they push. Not so much this round.

Wow, he's easily one of the worst I have seen on the platform.
 
Wow, he's easily one of the worst I have seen on the platform.
Yeah, his demeanor is more than a bit off-putting. I still gleaned a bit of good info, though: the Hellhound XTX has a 425 W max TBP, not 355 W.
 
Yeah, his demeanor is more than a bit off-putting. I still gleaned a bit of good info, though: the Hellhound XTX has a 425 W max TBP, not 355 W.
Yep! He is a hardcore OC'er that caters to e-sports. Like him or hate him, he maximizes FPS on a given platform and usually shits on the rest. Very open about it.
You feel strongly? Then go toe to toe, bruh! My point, all lost now (thanks!), is that the AMD cards do perform well. Per the review they do suck back too much power, but they also cost less, hence the balance on pricing.
As we all (should) know, it is a balancing act to buy the best for what you do (per game)!
 
So I did a little more messing with my reference 7900 XTX. I think mine needs more voltage as the clocks scale up. Example: I can run looping stress tests in 3DMark in any test and get a 99.5% passing score with the OC and voltage as low as 1050 mV; however, Assassin's Creed Valhalla will eventually crash at those settings unless I cap my max frequency at that voltage. In AC it will clock over 3 GHz at those settings (whereas in 3DMark it is in the 2.6-2.7 GHz range), but AC seems to get unstable at around 2800 MHz unless I increase the voltage to 1085 mV. At that voltage it will sit in the 2900-2950 MHz range most of the time and be perfectly stable. I decided that with the reference cooler I am not going to increase the power limit all the way. At +5% PL it still clocks well, and with the undervolt to 1085 mV the temps are just hanging onto what I would call acceptable after long gaming sessions (with the case and room warming up). At 70% fan the temps will slowly increase to 64-65C with a hot spot of 83-85C in a closed case.
 
I ordered one from Amazon on the 13th. Still hasn't shipped. I'm getting desperate, haha.
Cancel it and order an RTX 4080. It costs a little more, but the Nvidia ecosystem features are worth it by far... Consider it a blessing in disguise that your 7900 XTX didn't ship. The 4080 is about three-quarters of the price and performance of the 4090, so it's not a bad value.
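Quick napkin math on that value claim, using launch MSRPs; street prices obviously vary, and the 75% performance figure is just the rough claim above, not a benchmark:

```python
# Napkin math: RTX 4080 vs RTX 4090 on launch MSRP and the rough "3/4 of the
# performance" claim above. Relative performance here is an assumption, not data.
MSRP_4080 = 1199      # USD at launch
MSRP_4090 = 1599      # USD at launch
RELATIVE_PERF = 0.75  # assumed, per the claim above

price_ratio = MSRP_4080 / MSRP_4090
print(f"4080 price vs 4090: {price_ratio:.0%}")                        # ~75%
print(f"perf per dollar vs 4090: {RELATIVE_PERF / price_ratio:.2f}x")  # ~1.00x
```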
 
Cancel it and order an RTX 4080. It costs a little more, but the Nvidia ecosystem features are worth it by far... Consider it a blessing in disguise that your 7900 XTX didn't ship. The 4080 is about three-quarters of the price and performance of the 4090, so it's not a bad value.
It comes Tuesday now; it would have been here yesterday, but the weather got in the way. I'm space-limited and a 4080 won't fit. Regardless of all the bad press the XTX is getting, I look forward to tinkering.
 
It comes Tuesday now; it would have been here yesterday, but the weather got in the way. I'm space-limited and a 4080 won't fit. Regardless of all the bad press the XTX is getting, I look forward to tinkering.
Fair enough :). Enjoy your new card!
 
Yep! He is a hardcore OC'er that caters to e-sports. Like him or hate him, he maximizes FPS on a given platform and usually shits on the rest. Very open about it.
You feel strongly? Then go toe to toe, bruh! My point, all lost now (thanks!), is that the AMD cards do perform well. Per the review they do suck back too much power, but they also cost less, hence the balance on pricing.
As we all (should) know, it is a balancing act to buy the best for what you do (per game)!
Bruh.
 
Wow, he's easily one of the worst I have seen on the platform.
Interesting RT benchmarks. Newer RT games that aren't contaminated by Nvidia manipulation run well, maybe even on older AMD hardware as well -> go figure. Anyway, from that short take, for anyone with a 6900 XT or 3090 level card and up from last gen, it seems like a waste in general to upgrade other than to tinker or for specific use cases.
 
Interesting RT benchmarks. Newer RT games that aren't contaminated by Nvidia manipulation run well, maybe even on older AMD hardware as well -> go figure. Anyway, from that short take, for anyone with a 6900 XT or 3090 level card and up from last gen, it seems like a waste in general to upgrade other than to tinker or for specific use cases.
I'd wager that it has more to do with the amount of RT (e.g. number of rays, RT shadows, etc.) than with Nvidia manipulation.
 
I'd wager that it has more to do with the amount of RT (e.g. number of rays, RT shadows, etc.) than with Nvidia manipulation.
How optimized, as in does it take advantage of platform advantages like Infinity Cache on AMD, for example. With all of these reviewers, I would think someone would analyze whether there are any actual rendering differences when using RT in titles that perform well enough on both Nvidia and AMD hardware, like Spider-Man. As in, does the developer have different render targets for different GPUs? As for Nvidia manipulation (sponsorship, money, programming support, restrictions or influences), plus the fact that they own basically over 80% of the desktop RT hardware, which needs good driver support for your title as well: who really knows, and who would expose anything like manipulation other than anecdotally.

Real RT performance, a.k.a. V-Ray and others that are actually RT lighting -> Nvidia is definitely superior. In games using a hybrid approach, things can get interesting.
 
How optimized, as in does it take advantage of platform advantages like Infinity Cache on AMD, for example. With all of these reviewers, I would think someone would analyze whether there are any actual rendering differences when using RT in titles that perform well enough on both Nvidia and AMD hardware, like Spider-Man. As in, does the developer have different render targets for different GPUs? As for Nvidia manipulation (sponsorship, money, programming support, restrictions or influences), plus the fact that they own basically over 80% of the desktop RT hardware, which needs good driver support for your title as well: who really knows, and who would expose anything like manipulation other than anecdotally.

Real RT performance, a.k.a. V-Ray and others that are actually RT lighting -> Nvidia is definitely superior. In games using a hybrid approach, things can get interesting.
The scaling has been pretty simple the last few generations:
The more simultaneous raytracing effects, the better NVIDIA cards fare against AMD cards.
You can see this very easily: in games that are "RT light" (only RT shadows/reflections), NVIDIA and AMD do not show that big of a performance delta, but when you look at "RT heavy" games that do RT global illumination and/or ambient occlusion on top of shadows and reflections, the performance delta increases a lot in NVIDIA's favour.
That sadly leads a lot of gamers to assume that game X is coded better than game Y, because they don't understand the nature of the SKUs involved, when it is simply the number of simultaneous effects that determines the performance delta.
Also, the quality of the effects varies greatly between games.
If you compare the raytraced reflections in Spider-Man and Cyberpunk 2077, it becomes very obvious that the reflections in Spider-Man are of lesser quality, do not reflect the entire world, and run at a lower FPS than the main game, so that is also a factor that needs to be considered.

These factors are why AMD SKUs tank in path-traced games, not because of "evil" code, even though some people seem to have a bias for "conspiracies".

And you wouldn't want developers having to write code to target specific SKUs' caches, as that would be kinda insane.
Look at how DX12 (aka "close to the metal") has caused developers performance issues.
You want the SKU to handle the L1, L2, L3, etc. caches, not the developers.
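To make the "more simultaneous effects, bigger delta" point concrete, here is a purely illustrative toy model; the per-effect costs are numbers I made up, not measurements from any card:

```python
# Toy model only: frame time = raster cost + (number of RT effects) * per-effect cost,
# with RT effects assumed to cost relatively more on the weaker RT hardware.
# Every number here is invented for illustration; none of them are benchmarks.
RASTER_MS = {"strong_rt_gpu": 6.0, "weak_rt_gpu": 6.0}     # equal raster speed (assumption)
RT_EFFECT_MS = {"strong_rt_gpu": 1.5, "weak_rt_gpu": 3.0}  # per RT effect (assumption)

def fps(gpu: str, num_effects: int) -> float:
    frame_ms = RASTER_MS[gpu] + num_effects * RT_EFFECT_MS[gpu]
    return 1000.0 / frame_ms

for n in (1, 2, 4):  # e.g. reflections only -> + shadows -> + GI and AO on top
    delta = fps("strong_rt_gpu", n) / fps("weak_rt_gpu", n) - 1
    print(f"{n} RT effect(s): stronger RT card leads by {delta:.0%}")
# Prints roughly 20%, 33%, 50%: the delta grows with the number of effects,
# even though nothing about the "code quality" of the game changed.
```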
 
The scaling has been pretty simple the last few generations:
The more simultaneous raytracing effects, the better NVIDIA cards fare against AMD cards.
You can see this very easily: in games that are "RT light" (only RT shadows/reflections), NVIDIA and AMD do not show that big of a performance delta, but when you look at "RT heavy" games that do RT global illumination and/or ambient occlusion on top of shadows and reflections, the performance delta increases a lot in NVIDIA's favour.
That sadly leads a lot of gamers to assume that game X is coded better than game Y, because they don't understand the nature of the SKUs involved, when it is simply the number of simultaneous effects that determines the performance delta.
Also, the quality of the effects varies greatly between games.
If you compare the raytraced reflections in Spider-Man and Cyberpunk 2077, it becomes very obvious that the reflections in Spider-Man are of lesser quality, do not reflect the entire world, and run at a lower FPS than the main game, so that is also a factor that needs to be considered.

These factors are why AMD SKUs tank in path-traced games, not because of "evil" code, even though some people seem to have a bias for "conspiracies".

And you wouldn't want developers having to write code to target specific SKUs' caches, as that would be kinda insane.
Look at how DX12 (aka "close to the metal") has caused developers performance issues.
You want the SKU to handle the L1, L2, L3, etc. caches, not the developers.
Writing code to target SKUs? Remember how Half-Life 2 had different rendering codepaths on release?

-DirectX 9.0/SM 2.0: Default for Radeon 9x00/Xx00, GeForce 6x00 and everything later, and the way the game was intended to look at release.
-DirectX 8.1/SM 1.1: Default for GeForce FX 5x00 because they suck at SM 2.0 like AMD cards do for RT today, and for GeForce 3/4 Ti and Radeon 8500 due to architectural limitations.
-DirectX 7.0/HW T&L with no shaders: Default for GeForce 4 MX because it was a glorified GeForce 2 architecture with no shaders!

Yeah, I remember those dark days. Not every PC game was as accommodating back then; some would just refuse to run if you didn't have programmable shaders, others would technically work but at such a slideshow framerate that they may as well not have bothered, and still others only had a SM 2.0 codepath and had GeForce FX owners seriously regretting their purchases. (Remember, a GeForce FX 5950 Ultra - then a $500 top-of-the-line card - performed only about as well as the Radeon 9600 - a $200 card - in SM 2.0, and the 9800 XT dominated it with twice the framerate!)

I'm dearly hoping RDNA 3 is not another NV30/GeForce FX moment. I find the 7900 XTX's RT performance adequate enough at 1080p, so I'll make do for a few years until we get better GPU architectures at hopefully less egregious prices from both NVIDIA and AMD (and hell, maybe even Intel too), since I think we're still at the point that no game dev is foolish enough to require an exceedingly competent RT card to run their game at all, especially when the next-gen consoles still struggle with it on RDNA 2 and all.
 
Writing code to target SKUs? Remember how Half-Life 2 had different rendering codepaths on release?

-DirectX 9.0/SM 2.0: Default for Radeon 9x00/Xx00, GeForce 6x00 and everything later, and the way the game was intended to look at release.
-DirectX 8.1/SM 1.1: Default for GeForce FX 5x00 because they suck at SM 2.0 like AMD cards do for RT today, and for GeForce 3/4 Ti and Radeon 8500 due to architectural limitations.
-DirectX 7.0/HW T&L with no shaders: Default for GeForce 4 MX because it was a glorified GeForce 2 architecture with no shaders!

Yeah, I remember those dark days. Not every PC game was as accommodating back then; some would just refuse to run if you didn't have programmable shaders, others would technically work but at such a slideshow framerate that they may as well not have bothered, and still others only had a SM 2.0 codepath and had GeForce FX owners seriously regretting their purchases. (Remember, a GeForce FX 5950 Ultra - then a $500 top-of-the-line card - performed only about as well as the Radeon 9600 - a $200 card - in SM 2.0, and the 9800 XT dominated it with twice the framerate!)

I'm dearly hoping RDNA 3 is not another NV30/GeForce FX moment. I find the 7900 XTX's RT performance adequate enough at 1080p, so I'll make do for a few years until we get better GPU architectures at hopefully less egregious prices from both NVIDIA and AMD (and hell, maybe even Intel too), since I think we're still at the point that no game dev is foolish enough to require an exceedingly competent RT card to run their game at all, especially when the next-gen consoles still struggle with it on RDNA 2 and all.
Prices will only increase in the future.
Process cost (cost per transistor) is going up, mask prices are going up, and design is getting more complicated.
Anybody thinking prices will "go back to the past" is only deluding themselves.

And there was even more "horror" back at the start of 3D cards.
You had to patch games to make your card work (e.g. a Glide patch).
DirectX was created to move away from that state of gaming.
 
Prices will only increase in the future.
Process cost (cost per transistor) is going up, mask prices are going up, and design is getting more complicated.
Anybody thinking prices will "go back to the past" is only deluding themselves.

And there was even more "horror" back at the start of 3D cards.
You had to patch games to make your card work (e.g. a Glide patch).
DirectX was created to move away from that state of gaming.
To be fair, if you're gaming at 1080p on a 24-inch monitor at 60 FPS, gaming has never been cheaper!
 
To be fair, if you're gaming at 1080p on a 24-inch monitor at 60 FPS, gaming has never been cheaper!
Indeed, the performance/cost at 1080p has never been better.
But people feel entitled today.
Now they want 4K 120 FPS gaming for less money.
I should not mention what I paid for my first GVP card for the Amiga 1200 (+$500)... or the first 8 MB RAM stick (+$400) I got in the early 1990s.

But people today seem a lot more entitled/whiny and blame nefarious motives rather than their own entitlement/lack of knowledge.
They even seem angry when people don't agree with their "feelings of a fair price" and pay MSRP.
 
The scaling has been pretty simple the last few generations:
The more simultaneous raytracing effects, the better NVIDIA cards fare against AMD cards.
You can see this very easily: in games that are "RT light" (only RT shadows/reflections), NVIDIA and AMD do not show that big of a performance delta, but when you look at "RT heavy" games that do RT global illumination and/or ambient occlusion on top of shadows and reflections, the performance delta increases a lot in NVIDIA's favour.
That sadly leads a lot of gamers to assume that game X is coded better than game Y, because they don't understand the nature of the SKUs involved, when it is simply the number of simultaneous effects that determines the performance delta.
Also, the quality of the effects varies greatly between games.
If you compare the raytraced reflections in Spider-Man and Cyberpunk 2077, it becomes very obvious that the reflections in Spider-Man are of lesser quality, do not reflect the entire world, and run at a lower FPS than the main game, so that is also a factor that needs to be considered.

These factors are why AMD SKUs tank in path-traced games, not because of "evil" code, even though some people seem to have a bias for "conspiracies".

And you wouldn't want developers having to write code to target specific SKUs' caches, as that would be kinda insane.
Look at how DX12 (aka "close to the metal") has caused developers performance issues.
You want the SKU to handle the L1, L2, L3, etc. caches, not the developers.
That may sound logical to you, and it appears to make sense, except that the whole purpose of Mantle, Vulkan and Metal is to have fewer layers between the code and the hardware, so programmers have more direct access to the hardware to get the most performance out of it. Since there are key differences between hardware, you bet the developer has to do extra work to get performance out of it, or otherwise it will run like crap on certain hardware combinations.

Now some put on green horn-rimmed glasses, others red, and maybe a few blue, and the data presented is cherry picked or pickled picked. One just has to do the best they can and actually look at the data, and sometimes a lot of data from different sources, in order to pick the right combo that will give them the most for what they can afford.

Lovelace has a gaming RT problem which can present itself even at 4K. Wat! Where did I get that from? Everyone knows (assumes) the 4090 kicks AMD in all RT games, right? Here are some interesting examples from Guru3d.com at 1080p using a Ryzen 9 5950X, a powerful system which many have built in the last couple of years, some even only a few months ago. Not limited to these examples.
https://www.guru3d.com/articles-pages/xfx-merc-310-radeon-rx-7900-xtx-review,1.html

  • The 7900 XTX (the Merc in this case), with Ultra quality settings in Watch Dogs Legion and RT Ultra on, is 7% faster than a 4090 at this resolution; even the 7900 XT beats the 4090
  • The 7900 XTX Merc is 18% (!) faster in RT than the 4090 in Formula 1 with Ultra High RT

The Lovelace scheduler is much less efficient than AMD's, and it is limiting in certain cases, as shown above. At higher resolutions Nvidia does better and better compared to AMD, except for one problem: the frame rate with raytracing is in many cases rather low and needs upscaling technology, which puts this problem right back in place for Lovelace. With DLSS rendering at lower resolutions, the CPU starts to get hit inefficiently, while AMD can do more with a lesser CPU.

So buyer beware on what you are buying, what system you are going to use it in, and at what resolution. For very fast, high frame rates at lower resolutions, maybe a more competitive use scenario with RT on the side for some games with a last-gen CPU -> I would say AMD may be the better choice, besides the cost savings. I don't think there is a CPU out yet that can drive the 4090 effectively, not because the 4090 is some untouchable level of hardware, but because it has a subpar scheduler that relies too much on the CPU. I would suspect Zen 4 with V-Cache should give the 4090 what it needs. Having to redo your whole system in order to fully use a rather high-priced graphics card is actually kind of funny, but you would have one hell of a gaming system. Too bad there is only one HDMI 2.1 port to take advantage of it.
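The CPU-limit argument is easy to picture with a toy min() model; every number below is invented purely to show the shape of the effect, none of it is benchmark data:

```python
# Toy model: delivered FPS is capped by whichever is slower, the GPU or the
# CPU ceiling (which includes driver/scheduler overhead). All numbers invented.
def delivered_fps(gpu_capable_fps: float, cpu_ceiling_fps: float) -> float:
    return min(gpu_capable_fps, cpu_ceiling_fps)

scenarios = {
    # resolution: {card: (what the GPU could render, CPU ceiling incl. overhead)}
    "1080p, heavy driver overhead": {"card_a": (220, 140), "card_b": (170, 165)},
    "4K, GPU-bound":                {"card_a": (95, 140),  "card_b": (70, 165)},
}

for res, cards in scenarios.items():
    a = delivered_fps(*cards["card_a"])
    b = delivered_fps(*cards["card_b"])
    print(f"{res}: card_a {a} fps vs card_b {b} fps")
# At 1080p the nominally faster card_a gets capped at its lower CPU ceiling (140)
# and loses to card_b (165); at 4K both are GPU-bound and card_a pulls ahead.
```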
 
Does it really surprise anyone that two games that were designed to have ray tracing on consoles have the ray-tracing equivalent of a wet noodle and run adequately on inferior hardware? Like, seriously? And on top of that, to infer that the 4090 has a ray-tracing problem because of cherry-picked outliers?

(attached: Picard facepalm meme)
 
That may sound logical to you, and it appears to make sense, except that the whole purpose of Mantle, Vulkan and Metal is to have fewer layers between the code and the hardware, so programmers have more direct access to the hardware to get the most performance out of it. Since there are key differences between hardware, you bet the developer has to do extra work to get performance out of it, or otherwise it will run like crap on certain hardware combinations.

Now some put on green horn-rimmed glasses, others red, and maybe a few blue, and the data presented is cherry picked or pickled picked. One just has to do the best they can and actually look at the data, and sometimes a lot of data from different sources, in order to pick the right combo that will give them the most for what they can afford.

Lovelace has a gaming RT problem which can present itself even at 4K. Wat! Where did I get that from? Everyone knows (assumes) the 4090 kicks AMD in all RT games, right? Here are some interesting examples from Guru3d.com at 1080p using a Ryzen 9 5950X, a powerful system which many have built in the last couple of years, some even only a few months ago. Not limited to these examples.
https://www.guru3d.com/articles-pages/xfx-merc-310-radeon-rx-7900-xtx-review,1.html

  • The 7900 XTX (the Merc in this case), with Ultra quality settings in Watch Dogs Legion and RT Ultra on, is 7% faster than a 4090 at this resolution; even the 7900 XT beats the 4090
  • The 7900 XTX Merc is 18% (!) faster in RT than the 4090 in Formula 1 with Ultra High RT

The Lovelace scheduler is much less efficient than AMD's, and it is limiting in certain cases, as shown above. At higher resolutions Nvidia does better and better compared to AMD, except for one problem: the frame rate with raytracing is in many cases rather low and needs upscaling technology, which puts this problem right back in place for Lovelace. With DLSS rendering at lower resolutions, the CPU starts to get hit inefficiently, while AMD can do more with a lesser CPU.

So buyer beware on what you are buying, what system you are going to use it in, and at what resolution. For very fast, high frame rates at lower resolutions, maybe a more competitive use scenario with RT on the side for some games with a last-gen CPU -> I would say AMD may be the better choice, besides the cost savings. I don't think there is a CPU out yet that can drive the 4090 effectively, not because the 4090 is some untouchable level of hardware, but because it has a subpar scheduler that relies too much on the CPU. I would suspect Zen 4 with V-Cache should give the 4090 what it needs. Having to redo your whole system in order to fully use a rather high-priced graphics card is actually kind of funny, but you would have one hell of a gaming system. Too bad there is only one HDMI 2.1 port to take advantage of it.

You just confirmed my points?
The less raytracing load, the less delta?
Try posting non-1080p benchmarks from that site.
You know, the 1440p/4K tests.

If you want a good read about how "closer to the metal" is not always the optimal solution, this would be a good place to start:
https://forum.beyond3d.com/threads/no-dx12-software-is-suitable-for-benchmarking-spawn.58013/
 
Does it really surprise anyone that two games that were designed to have ray tracing on consoles have the ray-tracing equivalent of a wet noodle and run adequately on inferior hardware? Like, seriously? And on top of that, to infer that the 4090 has a ray-tracing problem because of cherry-picked outliers?

(attached: Picard facepalm meme)
Well, for sure then, the card superior in RT and rasterization should not even hiccup on such a light load. 🙃 I see: if it performs like crap on AMD it must be heavy, and if it performs well, it is light and not even RT.

Nvidia has a hardware limitation which can only be overcome by the most expensive, fastest processors and memory. The lower SKUs will be interesting combined with more budget-friendly CPUs. Of course, they will be tested with ridiculously configured systems.
 
Well, for sure then, the card superior in RT and rasterization should not even hiccup on such a light load. 🙃 I see: if it performs like crap on AMD it must be heavy, and if it performs well, it is light and not even RT.

Nvidia has a hardware limitation which can only be overcome by the most expensive, fastest processors and memory. The lower SKUs will be interesting combined with more budget-friendly CPUs. Of course, they will be tested with ridiculously configured systems.


Instead of posting a facepalm meme, why don't you try to counter his arguments?
Indeed, it's quite confusing that the "superior RT card" gets lower performance at a lower resolution.
If the RT implementation in those two games is light RT, then the "superior RT card" should not have a hard time beating the "inferior" RX 7900 card.

Side note:
If you compare the RX 7900 series to the RX 6900 XT, the performance gap between those two reflects the results in other games.
 
Instead of posting a facepalm meme, why don't you try to counter his arguments?
Indeed, it's quite confusing that the "superior RT card" gets lower performance at a lower resolution.
If the RT implementation in those two games is light RT, then the "superior RT card" should not have a hard time beating the "inferior" RX 7900 card.

Side note:
If you compare the RX 7900 series to the RX 6900 XT, the performance gap between those two reflects the results in other games.
I already addressed his asinine post, but it was deleted because it was off topic, even though his post is 100% off topic discussing the 4090's RT performance in a thread talking about 7900 GPU reviews. I won't entertain the foolishness again. If you missed it, I apologize.

Here's the actual performance if you even care:

(attached: benchmark charts)
 
I already addressed his asinine post, but it was deleted because it was off topic, even though his post is 100% off topic discussing the 4090's RT performance in a thread talking about 7900 GPU reviews. I won't entertain the foolishness again. If you missed it, I apologize.

Here's the actual performance if you even care:

(attached: benchmark charts)
Foolishness looks more like your post. Turn on DLSS and then FSR 2 for smoother, more playable framerates. Make those Lovelace cards render at 1440p or 1080p and see how well the 7900 XTX catches up.

What, you don't want to discuss a 7900 XTX review showing RT performance? When it is kicking Nvidia's ass in a given scenario? Someone buying the MSI 1440p OLED 240 Hz monitor (the fastest pixel response time of any monitor), for example, on a previous-generation system will most likely be better off for gaming, and maybe even RT, with AMD this round.

Better yet, maybe a good deal on a 6900 XT or 3090.
 