[AMD] AMD shows more gaming benchmarks on RX 6800, 6800 XT and 6900 XT, at 4K and 1440p.

Faster RAM is faster, but I think the idea came from when Ryzen first launched, when there was a much bigger performance delta on the AMD platform.

As the AM4 platform matured, I'm not sure how true that is anymore. But it was true at one point.
There were lots of RAM tests with Skylake showing very tangible performance gains. It's why I still got a Z170 board, even though my first Skylake was a dual-core/4-thread i3 with locked cores.

I dunno. But Intel likes fast RAM too!
 
Yes, there are RAM benefits for both Intel and Ryzen, but Ryzen had horrible latency issues, so RAM speed had more of an effect. Intel does like faster RAM too, but the returns start to diminish much sooner than on an AMD system, and tweaking all the individual timings typically doesn't make as big a difference either. It's not that you don't get performance gains, it's just that they aren't as pronounced (bandwidth-limited loads respond more linearly, but random-access loads don't see as big a difference). Anyways, like I said, there IS a difference, especially if you're comparing 2400MHz to something like 3600MHz, so it isn't really true that Intel doesn't respond to memory speed. That's just where the idea that it's not as useful comes from (based in some truth, but probably overstated).
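As a rough illustration of why both memory speed and timings matter, here's a minimal sketch of the usual first-word latency approximation. The kits listed are illustrative examples, not figures from this thread:

```python
# Minimal sketch of the usual DDR4 first-word latency approximation:
# latency (ns) ~= CAS latency * 2000 / data rate (MT/s).
# The kits below are illustrative examples, not taken from this thread.
def cas_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    """Approximate CAS latency in nanoseconds for a DDR kit."""
    return cas_latency * 2000 / data_rate_mts

for rate, cl in [(2400, 16), (3600, 16), (3600, 14), (4400, 16)]:
    print(f"DDR4-{rate} CL{cl}: ~{cas_latency_ns(rate, cl):.2f} ns")
# DDR4-2400 CL16: ~13.33 ns
# DDR4-3600 CL16: ~8.89 ns   (same CL at a higher data rate cuts latency sharply)
# DDR4-3600 CL14: ~7.78 ns
# DDR4-4400 CL16: ~7.27 ns
```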
 
Again, we have to look at your boost clocks. The FE cards are not reference designs, actually. They generally boost higher than reference. So even with your FE at "stock", it is still likely performing better than a reference-clocked 3090, such as the Zotac 3090 Trinity I linked above. That card stays pretty tight around 1755MHz, and the reference boost clock for a 3090 is 1.7GHz, so as I said, that Zotac is more or less at "reference" clocks.

All that said, indeed, we do not know the map/mode which AMD tested in. But I am betting that if you tested with your card staying around 1.7GHz, you would see a further drop from your stock FE numbers.

Additionally, it's possible Zen 3 isn't as good in that game as Intel's 10 series.

Sorry but you're wrong
The FE *IS* 100% reference clocks and boost clocks. The FE just has a slightly higher power limit than most other stock "reference" boards, but not as high as the high-end AIB cards with three PCIe power connectors.
The only AIB cards that have higher boost clocks are the OC cards or the high-end cards like the Strix or FTW3.
All of this information is readily accessible on TechPowerUp.
 
Sorry but you're wrong
The FE *IS* 100% reference clocks and boost clocks. The FE just has a slightly higher power limit than most other stock "reference" boards, but not as high as the high-end AIB cards with three PCIe power connectors.
The only AIB cards that have higher boost clocks are the OC cards or the high-end cards like the Strix or FTW3.
All of this information is readily accessible on TechPowerUp.
I guarantee the FE has a higher average boost than that Zotac. It's too bad TechPowerUp wasn't able to review an FE to show their boost graph for it. But other reviews, such as Tom's, at least indicate a relatively higher boost on the FE.

What are your boost clocks before manual overclock or manual power limit tweaking?

It's important because some GPU-limited games are actually very sensitive to MHz boosts on the GPU core, and CoD is one of those games.
 
I guarantee the FE has a higher average boost than that Zotac. It's too bad TechPowerUp wasn't able to review an FE to show their boost graph for it. But other reviews, such as Tom's, at least indicate a relatively higher boost on the FE.

What are your boost clocks before manual overclock or manual power limit tweaking?

It's important because some GPU-limited games are actually very sensitive to MHz boosts on the GPU core, and CoD is one of those games.

GPU clock: 1395 MHz stock, 1695 MHz boost.
At stock clocks and stock power limit, COD goes up to 1950MHz at 55C; voltage is 1.081V.
The screenshot shows the clocks reset to stock. You can see this matches the reference spec exactly.

The absolute highest I've been able to run COD stable is +135 core with the power limit at 114% for 400W. There the core goes from 2085MHz to 2070MHz (seems to happen around 58C). +150 core eventually crashes with DEV ERROR 6068, even with the Afterburner slider at 100%. The Seasonic 12-pin adapter helped me gain a tiny bit more overclock, as previously only +105 was stable, but I also changed to the hotfix driver afterwards, which gave me even higher stable clocks (I had already seen a +15MHz improvement from the Seasonic cable on the first Studio driver).

COD MW/Warzone is the only game so far that never hits 400W at 1080p at completely maxed settings and DXR, so the GPU is purring along at very steady clocks.

(The slider only seems to allow +1 voltage bin and a +15MHz clock bin automatically if the chip is below a certain temp.)
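As a quick sanity check of the numbers above, here's a small sketch. The 350W stock limit is an assumption based on the 3090 FE's advertised power rating; the 114% slider and the 15MHz bins are from this post:

```python
# Quick sanity check of the power-limit math described above.
# The 350 W stock limit is an assumption (3090 FE's advertised rating);
# the 114% slider value and 15 MHz clock bins are from the post.
STOCK_LIMIT_W = 350
SLIDER = 1.14

print(round(STOCK_LIMIT_W * SLIDER))      # 399 -> roughly the 400 W mentioned

CLOCK_BIN_MHZ = 15                        # GPU Boost moves clocks in 15 MHz steps
print((2085 - 2070) == CLOCK_BIN_MHZ)     # the drop observed near 58 C is exactly one bin
```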

[Attachment: gpu_z.jpg (GPU-Z screenshot at stock clocks)]
 
At stock clocks and stock power limit, COD goes up to 1950MHz at 55C; voltage is 1.081V.
The screenshot shows the clocks reset to stock. You can see this matches the reference spec exactly.


Is that 1950 a momentary max frequency, or does it hold fairly close to that?

Because I suspect AMD is testing a card which averages closer to the 1.7GHz reference spec.

TechPowerUp shows the Zotac 3090 Trinity averaging about 1755MHz. If you can limit your card to that, I think that could get you down to AMD's posted numbers.
 
Is that 1950 a momentary max frequency, or does it hold fairly close to that?

Because I suspect AMD is testing a card which averages closer to the 1.7GHz reference spec.

TechPowerUp shows the Zotac 3090 Trinity averaging about 1755MHz. If you can limit your card to that, I think that could get you down to AMD's posted numbers.

Oh... the Zotac cards boost lower than reference cards by default, due to the "crashing" problem. This was posted in an article some time back; sorry, I do not have the links. It was discussed on YouTube during the whole MLCC/SP-cap thing on Gamers Nexus and some other sites.

Anyway, there are two boost clocks.
1395MHz is the base clock and 1695MHz is the main boost clock. There is also "GPU Boost 3.0": that is what the card will boost to PAST 1695MHz, if the temps and power budget (power limit) allow.

1950MHz is what the card will boost to at around 50-60C. Past 60C it drops another 15MHz; I think that happens every 6C or 8C, I forget. I know on Pascal this started at 38C. The clocks move in 15MHz steps.
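A rough way to picture that behaviour, as a sketch under this post's assumptions (~1950MHz holds up to ~60C, then one 15MHz bin dropped roughly every 6-8C; the 7C step here is purely illustrative):

```python
# Rough model of the temperature-dependent boost behaviour described above.
# Assumptions from this post: ~1950 MHz up to ~60 C, then one 15 MHz bin is
# dropped roughly every 6-8 C (7 C used here purely for illustration).
RATED_BOOST_MHZ = 1695         # advertised boost clock
OPPORTUNISTIC_MAX_MHZ = 1950   # observed GPU Boost ceiling at ~50-60 C
BIN_MHZ = 15
TEMP_STEP_C = 7                # assumed degrees per dropped bin past 60 C

def estimated_boost(temp_c: float) -> int:
    """Estimate the sustained boost clock at a given GPU temperature."""
    if temp_c <= 60:
        return OPPORTUNISTIC_MAX_MHZ
    bins_dropped = int((temp_c - 60) // TEMP_STEP_C) + 1
    return max(RATED_BOOST_MHZ, OPPORTUNISTIC_MAX_MHZ - bins_dropped * BIN_MHZ)

for t in (55, 62, 70, 80):
    print(t, estimated_boost(t))   # 55 -> 1950, 62 -> 1935, 70 -> 1920, 80 -> 1905
```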
 
Oh... the Zotac cards boost lower than reference cards by default, due to the "crashing" problem. This was posted in an article some time back; sorry, I do not have the links. It was discussed on YouTube during the whole MLCC/SP-cap thing on Gamers Nexus and some other sites.

Anyway, there are two boost clocks.
1395MHz is the base clock and 1695MHz is the main boost clock. There is also "GPU Boost 3.0": that is what the card will boost to PAST 1695MHz, if the temps and power budget (power limit) allow.

1950MHz is what the card will boost to at around 50-60C. Past 60C it drops another 15MHz; I think that happens every 6C or 8C, I forget. I know on Pascal this started at 38C. The clocks move in 15MHz steps.
It looks like CODMW also has some driver issues with Nvidia and fullscreen windowed, plus some issues which can cause it to run in fullscreen windowed even when that isn't intended, or even flicker back and forth between exclusive fullscreen and fullscreen windowed. Finally, Nvidia's Reflex setting in COD lowers framerates. I wonder if AMD experienced these issues during testing, and/or whether Reflex enables by default if you simply toggle the "Ultra" settings.

 
So this is the most I can give you.
2560x1440 max settings, Ultra, 100% render scale, DirectX raytracing enabled.

At stock settings and stock power limit on the RTX 3090 FE (I assume 350W), I averaged 171 FPS in Warzone in the first game and 169 FPS in the second game (that game was mostly me being around the Storage Area).
i9-10900K @ 4.7GHz and RAM at 4400MHz, 2x16GB, 16-17-17-37 primary timings; AIDA read 68K, write 68K, copy 66.5K, 35.9ns latency (mind you, that AIDA run was at 5.2GHz core / 4.8GHz cache, while my Warzone games were at 4.7GHz core / 4.4GHz cache).

With the 400W power limit (114%), +120 core and +600 memory, I scored a 182 FPS average (first part around the Airport; after I finished the Gulag, around the southeastern Farmland).
Either way, that's way above AMD's 148 FPS average, so AMD's test is completely useless until we know exactly what AMD tested. Because if there is some sort of "benchmark mode" that someone claimed AMD used, I don't think regular gamers on Battle.net have access to it.
With that memory setup it'd be hard to not shit on a standard reviewer's system.
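For context, here's a minimal sketch of how far those averages sit above AMD's posted figure. All the numbers are taken from this post and AMD's page (171/169 FPS stock runs, 182 FPS overclocked, 148.4 FPS from AMD):

```python
# The gap being argued about, spelled out. All figures come from this post
# and AMD's benchmark page; nothing here is measured independently.
amd_posted = 148.4
stock_fe = (171 + 169) / 2            # average of the two stock Warzone runs
overclocked = 182

for label, fps in (("stock FE", stock_fe), ("400 W, +120 core", overclocked)):
    delta = (fps - amd_posted) / amd_posted * 100
    print(f"{label}: {fps:.1f} FPS ({delta:+.1f}% vs AMD's figure)")
# stock FE: 170.0 FPS (+14.6% vs AMD's figure)
# 400 W, +120 core: 182.0 FPS (+22.6% vs AMD's figure)
```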
 
So this is the most I can give you.
2560x1440 max settings, Ultra, 100% render scale, DirectX raytracing enabled.

At stock settings and stock power limit on the RTX 3090 FE (I assume 350W), I averaged 171 FPS in Warzone in the first game and 169 FPS in the second game (that game was mostly me being around the Storage Area).
i9-10900K @ 4.7GHz and RAM at 4400MHz, 2x16GB, 16-17-17-37 primary timings; AIDA read 68K, write 68K, copy 66.5K, 35.9ns latency (mind you, that AIDA run was at 5.2GHz core / 4.8GHz cache, while my Warzone games were at 4.7GHz core / 4.4GHz cache).
When I went to the AMD benchmark page I did not see Warzone listed. Have you benchmarked any of the games that AMD tested?
 
He is probably talking about Call of Duty and not Forza 4. If you go to the link:
https://www.amd.com/en/gaming/graphics-gaming-benchmarks

Choose Call of Duty and 1440p, and it shows an average of 148.4 FPS for the 3090, on an AMD 5900X.
I went back to reread his post and I saw this:
"148.4 FPS average on a RTX 3090 at 1440p?"
He said "average". Did he look at the AMD benchmark page and average all the scores? Maybe he will clarify where he got the 148.4 FPS number.
 
He said "aveage". Did he look at the AMD benchmark page and averaged all the scores? Maybe he will clarify where he got the 148.4 FPS number.
148.4 FPS is exactly the number for Call of Duty at 1440p, which would be quite the coincidence, especially since he refers to maps right after. The average is the average FPS in that specific game.
 
Sorry but you're wrong
The FE *IS* 100% reference clocks and boost clocks.
This has been explained before. This generation the FE boards are not reference.

This is from Tom's Hardware
"Also confirming that FE boards are not equal to reference boards"
This is from Kyle

tangoseal said:
What's the real difference between Founders and reference? Because EK says their block works on reference but not Founders. I thought Founders was as reference as you can get.
"Nope, FE cards are FE cards only this round to my knowledge. None of the kits sent out by NVIDIA have been for the small "V" PCBs. I do not think it has even been offered to AIBs."
posted here
https://hardforum.com/threads/asus-rog-strix-3090-400w-gpu.2000841/#post-1044712989
 
148.4 FPS is exactly the number for Call of Duty at 1440p, which would be quite the coincidence, especially since he refers to maps right after. The average is the average FPS in that specific game.
Makes sense, I noticed that in his subsequent posts he talked about COD.
 
6900XT

For the cost of a good AIB 3090 ($1700) you can pick up a 6900XT ($999) + waterblock ($150) + 5900X ($549) = $1698
AND still get better performance than a 3090 in the majority of games.
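For what it's worth, the arithmetic in that comparison checks out (prices exactly as quoted above):

```python
# The cost comparison above, spelled out (prices exactly as quoted in the post).
bundle = {"6900 XT": 999, "waterblock": 150, "5900X": 549}
aib_3090 = 1700

total = sum(bundle.values())
print(total)               # 1698
print(aib_3090 - total)    # 2 -> essentially the same outlay
```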

The gap will only widen as time goes on and games get more optimized for AMD CPUs and GPUs, since the Xbox Series X and PS5 share the same AMD architectures and technologies.

EZ decision.
 
Stryker7314, let's just hope the driver and stability issues don't happen like they did with the 5700 line, which basically screwed AMD and put anyone off wanting to buy those GPUs...
 
Stryker7314, let's just hope the driver and stability issues don't happen like they did with the 5700 line, which basically screwed AMD and put anyone off wanting to buy those GPUs...

The drivers thing has been overblown. I have had Nvidia for several generations, and they have constant issues with drivers; just check the Nvidia driver release threads and their driver release notes. It's a wash at this point on driver issues. There are plenty of folks running 5700 GPUs without problems; you state it like the product was a failure when it wasn't at all. If anything, Nvidia will have the driver issues, with their different architecture and everything being made for AMD due to the consoles.
 
The drivers thing has been overblown. I have had Nvidia for several generations, and they have constant issues with drivers; just check the Nvidia driver release threads and their driver release notes. It's a wash at this point on driver issues. There are plenty of folks running 5700 GPUs without problems; you state it like the product was a failure when it wasn't at all. If anything, Nvidia will have the driver issues, with their different architecture and everything being made for AMD due to the consoles.

What you are talking about are the regular driver issues that both companies have. The problem with AMD is that in the past they have had issues over and above the usual. Once they got over the bad start with the 290X/290 cards they were a lot better, with only one bad period, from December 2017 to the end of January 2018, when they released a driver that messed up performance for a month and then the driver that fixed it made DX9 games unplayable.

But they were doing really well apart from that. They were much faster at releasing game-ready drivers and much faster at getting the full performance out of their cards. Until they released Navi. The black screen issues are well documented; there were a huge number of complaints on their forums, Reddit, etc. The issue wasn't overblown. AMD came out and acknowledged the problem and eventually fixed it.

Nobody is saying that Nvidia doesn't have driver issues, of course they do.
 
The drivers thing has been overblown. I have had Nvidia for several generations, and they have constant issues with drivers; just check the Nvidia driver release threads and their driver release notes. It's a wash at this point on driver issues. There are plenty of folks running 5700 GPUs without problems; you state it like the product was a failure when it wasn't at all. If anything, Nvidia will have the driver issues, with their different architecture and everything being made for AMD due to the consoles.
Not overblown, and let's not rewrite history. The 5700 XT black screen issues took four months for AMD to finally acknowledge -- the delay was what infuriated a lot of people.

That said, AMD's past driver issues wouldn't concern me enough not to buy a 6800 XT. A new product should be evaluated case by case on its own merits. Companies improve.
 
Not overblown, and let's not rewrite history. The 5700 XT black screen issues took four months for AMD to finally acknowledge -- the delay was what infuriated a lot of people.

That said, AMD's past driver issues wouldn't concern me enough not to buy a 6800 XT. A new product should be evaluated case by case on its own merits. Companies improve.
What you are talking about are the regular driver issues that both companies have. The problem with AMD is that in the past they have had issues over and above the usual. Once they got over the bad start with the 290X/290 cards they were a lot better, with only one bad period, from December 2017 to the end of January 2018, when they released a driver that messed up performance for a month and then the driver that fixed it made DX9 games unplayable.

But they were doing really well apart from that. They were much faster at releasing game-ready drivers and much faster at getting the full performance out of their cards. Until they released Navi. The black screen issues are well documented; there were a huge number of complaints on their forums, Reddit, etc. The issue wasn't overblown. AMD came out and acknowledged the problem and eventually fixed it.

Nobody is saying that Nvidia doesn't have driver issues, of course they do.

And still overblown... :hilarious:
 