"How We Test: CPU Benchmarks, Misconceptions Explained", aka "why you don't want to be GPU-limited when doing CPU testing"

1_rick

Supreme [H]ardness
Joined
Feb 7, 2017
Messages
5,402
Interesting article from TechSpot about CPU testing with high-end GPUs and low resolution.

"The push back to this approach is the alleged claim that it's "unrealistic," and that no one is going to use an RTX 4090 at 1080p, and certainly no one is going to do this with a mid-range to low-end CPU. "

The arguments will be familiar to anyone who's been around here for a while and TechSpot's position is basically "we know what you're saying and you're wrong."

https://www.techspot.com/article/2618-cpu-benchmarks-explained/

They did stuff like retest the i7-8700K vs. the R5 2600X with a 1070, 3060, 3080, and 4090 at 1080p ultra high, taking on arguments that "the 1080 is a bad GPU to pair with a 2600X, so you should test it with a 1070 instead" and explaining, with graphs, why that doesn't give a good picture of the relative power of the two CPUs in gaming (the faster the GPU, the more the 8700K pulls ahead; on the 1080, they're 75fps to 73fps, but on a 4090, it's 120 to 99). Warhammer: Vermintide 2 at 1080p extreme had the same results: the two CPUs have nearly indistinguishable results on the 1080, ballooning out to a 30% advantage with a 4090.

"Using the medium quality preset we found back then that the Core i7-8700K was 17% faster on average, which is not miles off what we're seeing here with the RTX 3060, which provided a 23% margin between the two CPUs. The margin was much the same with the GeForce RTX 3080, and up to 29% faster with the RTX 4090.

It's made clear then that the GeForce GTX 1080 Ti tested at 1080p gave us a much better indication (over five years ago) as to how things might look in the future, when compared to a more mid-range product like the GTX 1070."
 
To drive it home, they also compare an i3-13100 with an i9-13900K with four different GPUs and get basically the same results: with something like a 3060 at 1080p high, you get exactly the same frame rate. Even with a 3080, the i9's not a lot better, but with a 4090, you get 226 vs 148 fps.
 
Taking advantage of "more cores" scenarios can be quite complicated, and maybe "ignored" until high core counts become "the norm" console-wise. Then maybe it becomes a more interesting issue, as perhaps games will assume 8 "good" cores or more. Until then, many gamers can save some money.
 
The number of people who argue against removing the GPU limit when testing CPUs, and against removing the CPU limit as much as possible when testing GPUs, even in these forums where people should know better, never ceases to amaze me.

If you do a real world 1:1 test you know only how that one combination of CPU and GPU will perform.

If you isolate the CPU and GPU while testing, you can predict the performance of almost any combination of CPU and GPU.
 
....{Glances over at his 8700k}.....That'll do, Pig, that'll do..........Nice knowing it has a little room to grow when my 1080 Ti (roughly a step below or trading blows with a bog-standard 3060, if I'm not mistaken) gets replaced someday.......
 
If you do a real world 1:1 test you know only how that one combination of CPU and GPU will perform.
And that is literally the only thing that really matters to me. I don't care about theoreticals or predictions. To nobody's surprise I always have a combination of a GPU and a CPU in my system and not a CPU with a theoretically infinitely fast GPU.
I'm not saying they shouldn't run CPU tests at low resolution, but I only see that as a curiosity. What I really want to know is when the CPU bottleneck switches to a GPU bottleneck. I don't care what the max theoretical performance of the CPU is for gaming, as I'll never see that in real-world situations. All I need to know is whether the CPU is fast enough to not hold back the GPU at my preferred resolution.
 
And that is literally the only thing that really matters to me. I don't care about theoreticals or predictions. To nobody's surprise I always have a combination of a GPU and a CPU in my system and not a CPU with a theoretically infinitely fast GPU.
I'm not saying they shouldn't run CPU tests at low resolution, but I only see that as a curiosity. What I really want to know is when the CPU bottleneck switches to a GPU bottleneck. I don't care what the max theoretical performance of the CPU is for gaming, as I'll never see that in real-world situations. All I need to know is whether the CPU is fast enough to not hold back the GPU at my preferred resolution.

So, if a review site tests a CPU and they use a different GPU than you have or different settings than you play at, then that review is totally useless to you.

It might work for you if you use very common mainstream hardware and settings, but it is highly impractical.

I find that it almost never happens that a reviewer reviews my exact hardware configurations and settings. (In fact, I haven't seen that happen even once in at least 12 years)

I have a Threadripper 3960x. No reviewers do GPU reviews on threadrippers. Before that I had an aging i7-3930k overclocked to 4.8GHz. It was ultra rare to see an i7-3930k as a review platform even when it was new, and even when it did happen, it was never tested at my 4.8GHz.

Also, when I buy a CPU it is going to last me through several GPU upgrades. When I bought my i7-3930k I had dual Radeon HD 6970's in crossfire. Then I got a single 7970, followed by a single GTX680, followed by the original 6GB Kepler Titan, followed by dual 980TI's in SLI and finally my Pascal Titan X before I finally retired my x79 platform.

It is utterly useless to me to see how a CPU I am considering buying performs with just a single GPU, even if it is the single GPU I have today (which usually isn't the case), because 2-5 more GPU's are probably going in that CPU/motherboard combo before it retires. I want to see how much more margin above my current GPU it has in it, and determine if it will be the best choice of what's out there for my next 2-5 GPU upgrades. If another CPU had more headroom in it, I want to know that and will likely choose it instead.

I find the value of any review is greater the more they can isolate the different variables so I can theoretically combine them myself into something that has value to me, because otherwise they have no value at all, as no one will ever test my hardware configuration.

That and this isn't some theoretical test.

If you have test results for a CPU with the GPU limit removed (lowest resolution and settings) and test results for a GPU (with as much as possible, the CPU limit removed) the minimum of those two results WILL be the results you have when you combine them. Money in the bank. It's not just theory, it is solid enough to be considered real data.
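As a rough sketch of that "take the minimum" idea (all numbers below are made up, and it only holds where each review actually removed the other bottleneck):

```python
# Rough sketch of the "minimum of the two results" rule of thumb.
# cpu_fps: average fps from a CPU review with the GPU limit removed (low res/settings).
# gpu_fps: average fps from a GPU review with the CPU limit removed (fast test CPU).
# All numbers are hypothetical, for illustration only.
def predicted_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Whichever component is slower sets the ceiling for the combined system."""
    return min(cpu_fps, gpu_fps)

cpu_reviews = {"CPU A": 160.0, "CPU B": 120.0}   # unrestrained CPU results (made up)
gpu_reviews = {"GPU X": 140.0, "GPU Y": 90.0}    # unrestrained GPU results (made up)

for cpu, c in cpu_reviews.items():
    for gpu, g in gpu_reviews.items():
        print(f"{cpu} + {gpu}: ~{predicted_fps(c, g):.0f} fps expected")
# CPU A + GPU X -> ~140 (GPU limited); CPU B + GPU X -> ~120 (CPU limited), etc.
```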

The thing is, as a PC hardware enthusiast (and this is a PC hardware enthusiast forum) I am used to having somewhat unusual and expensive hardware that is far from the mainstream hardware that gets tested.

My options are not "combine different reviews" versus "just look at reviews with my exact configuration." It's either combine data from different reviews or have no review to rely on at all, and the latter isn't a very good option.

So, I am all for isolating the GPU in a GPU review and isolating the CPU in a CPU review. That is information I can do something with. Anything else is going to wind up being useless to me. If they want to add it to the review as a nice little extra, that is fine, it doesn't bother me, but without the isolation the review isn't worth my time. It literally contains no information I can do anything with.
 
And that is literally the only thing that really matters to me. I don't care about theoreticals or predictions. To nobody's surprise I always have a combination of a GPU and a CPU in my system and not a CPU with a theoretically infinitely fast GPU.

The combinations add up too fast to run all the CPUs with all the GPUs. AMD puts out maybe 5 processor SKUs a generation, Intel is more like 10?, and the GPUs have 5 flavors each generation too. There's just not enough money in reviews for anyone to run a couple hours of tests on 150 different configurations. More if you want to know how the new CPU will do with last year's GPU or how the new GPU will do with last year's CPU. More still if you test the GPUs on the CPUs.
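Back-of-the-envelope on how fast that blows up (the SKU counts and the two-hours-per-config figure are just assumptions):

```python
# Rough count of test configurations; every number here is an assumption.
cpus_per_gen = 15      # AMD + Intel desktop SKUs in a generation (assumed)
gpus_per_gen = 10      # AMD + Nvidia cards in a generation (assumed)
last_gen_parts = 10    # previous-gen CPUs/GPUs people still own (assumed)
hours_per_config = 2   # one game suite on one CPU+GPU combo (assumed)

current_only = cpus_per_gen * gpus_per_gen
with_last_gen = (cpus_per_gen + last_gen_parts) * (gpus_per_gen + last_gen_parts)

print(f"current gen only: {current_only} configs, ~{current_only * hours_per_config} hours")
print(f"plus last gen:    {with_last_gen} configs, ~{with_last_gen * hours_per_config} hours")
```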

Run all the CPUs against a great GPU on easy settings. Run all the GPUs against a great CPU with settings on max. Maybe pick a couple combinations that seem to make sense and confirm they make sense if you've got more time.
 
Taking advantage of "more cores" scenarios can be quite complicated, and maybe "ignored" until high core counts become "the norm" console-wise. Then maybe it becomes a more interesting issue, as perhaps games will assume 8 "good" cores or more. Until then, many gamers can save some money.
The amount of cores in consoles doesn't really matter---for PC ports.

A. All of the current-gen consoles have nearly the same CPU: an 8-core Zen 2 CPU with some customizations/enhancements but also some downgrades (cache sizes). Additionally, it's designed for a much lower power envelope than the desktop Zen 2 CPUs, so the base and boost clocks are a lot lower.
They are good CPUs and certainly a ton better than what was in the previous-generation consoles. But a desktop 6-core CPU like a 5600X would hand that CPU its ass, with two fewer cores. A 7600X is like another universe of performance.

B. PC ports of games can often stray pretty far from the console code. Not always, but it seems to be fairly common for games to be pretty much completely rebuilt/retranslated to a different API or engine to achieve whatever development goals. A recent example is God of War, which shows performance trends opposite of what people assume from a console port. The assumption is that ports of console games would favor AMD GPUs and CPUs. However, God of War runs better on Nvidia and Intel.
Just buy the best stuff you can afford and play games ;)
I'm not saying they shouldn't run CPU tests at low resolution, but I only see that as a curiosity. What I really want to know is when the CPU bottleneck switches to a GPU bottleneck. I don't care what the max theoretical performance of the CPU is for gaming, as I'll never see that in real-world situations. All I need to know is whether the CPU is fast enough to not hold back the GPU at my preferred resolution.
This sort of stuff is exactly what Hardware Unboxed tells you with some of their exploratory videos. They are practically the only outlet that will test a high-end CPU with something like a 6600 XT and put that number next to a 6900 XT and a 4090. And in doing so, they give you information about GPU/CPU bottlenecks.

They also give you direct comparisons of different generations of the same CPU class, so you can easily see how much an upgrade will actually benefit you. They also nearly always encourage balancing your system, thinking about cost per frame, and being reasonable about what your framerate and resolution goals are vs. the hardware you are actually buying.

They also show 1% lows on everything, always. And that helps underline all of this stuff.

They have even explored the CPU overhead of drivers, concluded that Nvidia drivers generally carry more CPU overhead, and quantified that against all of the info they show in their various testing.

I think that overall, Hardware Unboxed has the best information for gaming performance, of any site/channel.

The one thing I question about their numbers is that their Zen 3 and 4 performance seems consistently higher than most other sites' (a good example would be 3600k vs 7600x), with a few games showing runaway performance, such as Horizon Zero Dawn. I can look at benchmark numbers from other sites for HZD with relatively the same hardware, and it doesn't always stack up to HUB's numbers. I would really like to know what's going on there. Similarly, the benefit Intel receives from DDR5 seems to be relatively higher in HUB's numbers than at other sites.
 
Well, I obviously get the value in both (the 4090 can have issues at 1080p anyway, I think).

Some people obviously want a more raw idea of relative CPU performance, testing at low settings and low resolution, which can give some idea for the next GPU upgrade, for games not tested, and so on.
And everyone is interested in whether it matters with the GPU they plan to get, or the one they have right now, at the resolution and settings they play at.

Which makes both kinds of testing relevant; I am not sure it is an either/or question.
 
So, if a review site tests a CPU and they use a different GPU than you have or different settings than you play at, then that review is totally useless to you.
Yes, basically. If my GPU is not on the list, or at least something I know to be of similar performance, then the test doesn't tell me anything relevant to me. If I'm interested in raw CPU performance, PassMark or Cinebench is a better reference than how the CPU performs in random games that might favor one or the other brand when running the fastest GPU at the lowest resolution.
It might work for you if you use very common mainstream hardware and settings, but it is highly impractical.
What's impractical is looking at tests ran with configurations I'll never encounter.
I find that it almost never happens that a reviewer reviews my exact hardware configurations and settings. (In fact, I haven't seen that happen even once in at least 12 years)
If it doesn't have my exact CPU or GPU then most of the time it will have something that I know the relative performance of to my HW. The tests that are most useless to me are ones that only feature the latest HW with no point of reference from previous generations.
I have a Threadripper 3960x. No reviewers do GPU reviews on threadrippers.
No, but you have an idea how your CPU fares against more commonly featured gaming oriented CPUs. When I'm looking at CPU tests I'm not looking at how they fare against each other. I'm interested in how much and where I'd gain if I were to upgrade to that CPU.
Before that I had an aging i7-3930k overclocked to 4.8GHz. It was ultra rare to see an i7-3930k as a review platform even when it was new, and even when it did happen, it was never tested at my 4.8GHz.
That's the easiest one: with classic all-core overclocks the performance scales practically linearly, so the OC should not be an issue.
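As a back-of-the-envelope example (hypothetical numbers, and "practically linear" is itself an approximation since memory and cache don't scale with the core clock):

```python
# Estimate a CPU-limited game result for an all-core overclock from a stock-clock review,
# assuming roughly linear scaling with core clock. This is an approximation, not a
# guarantee; all numbers are hypothetical.
stock_clock_ghz = 3.8    # clock the reviewer tested at (assumed)
oc_clock_ghz = 4.8       # my all-core overclock (assumed)
reviewed_fps = 70.0      # CPU-limited fps from the review (made up)

estimated_fps = reviewed_fps * (oc_clock_ghz / stock_clock_ghz)
print(f"estimated CPU-limited fps at {oc_clock_ghz} GHz: ~{estimated_fps:.0f}")  # ~88
```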
Also, when I buy a CPU it is going to last me through several GPU upgrades. When I bought my i7-3930k I had dual Radeon HD 6970's in crossfire. Then I got a single 7970, followed by a single GTX680, followed by the original 6GB Kepler Titan, followed by dual 980TI's in SLI and finally my Pascal Titan X before I finally retired my x79 platform.
It is utterly useless to me to see how a CPU I am considering buying performs with just a single GPU, even if it is the single GPU I have today (which usually isn't the case), because 2-5 more GPU's are probably going in that CPU/motherboard combo before it retires. I want to see how much more margin above my current GPU it has in it, and determine if it will be the best choice of what's out there for my next 2-5 GPU upgrades. If another CPU had more headroom in it, I want to know that and will likely choose it instead.
Future proofing never worked in PC hardware. It is not worth it to waste money to buy a CPU now based on the chance that you might buy a GPU 2 years later that will benefit from the extra performance. If I need extra CPU headroom in 2025 I'll buy a new cpu then, not now. Every time I tried to future proof it backfired on me and turned out to be wasted money.
I find the value of any review is greater the more they can isolate the different variables so I can theoretically combine them myself into something that has value to me, because otherwise they have no value at all, as no one will ever test my hardware configuration.
The value of a review is in being able to tell how much I'd gain by buying the HW being tested. If a 13900K is let's say 25% faster than a 7800X in a game at 1080p, that is meaningless information to me, as at 4K they'll be indistinguishable, and if I were to base my decision on that test I'd be wasting my money.
That and this isn't some theoretical test.
It's as theoretical as the top speed of a Bugatti vs. a Koenigsegg. It doesn't matter which one is faster on a 10 km closed road in a straight line, as I'll never encounter that situation in the real world.
If you have test results for a CPU with the GPU limit removed (lowest resolution and settings) and test results for a GPU (with as much as possible, the CPU limit removed) the minimum of those two results WILL be the results you have when you combine them. Money in the bank. It's not just theory, it is solid enough to be considered real data.
That could be useful if you are looking to upgrade both the CPU and the GPU at the same time, which I have not done since I've been building my own PCs, due to the cost implications. But even then, average frame rate is not the full story. Just because a GPU does 200FPS average with the CPU limit eliminated, and a CPU can also achieve an average of 200FPS with the GPU limit eliminated, doesn't guarantee that they'll still do 200FPS when paired, as the bottlenecks can and will be different in those separate tests.
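A toy example of why the averages alone can over-promise (all per-scene numbers are invented): both parts average 200FPS in isolation, but they don't choke in the same scenes, so the combined average lands lower.

```python
# Toy illustration: CPU and GPU each average 200 fps in their isolated tests,
# but their slow moments fall in different scenes, so the combined run ends up
# below min(200, 200). Every number here is invented.
cpu_fps_per_scene = [260, 140, 260, 140]   # CPU chokes in crowded scenes -> avg 200
gpu_fps_per_scene = [140, 260, 140, 260]   # GPU chokes in heavy scenes   -> avg 200

combined = [min(c, g) for c, g in zip(cpu_fps_per_scene, gpu_fps_per_scene)]

def avg(xs):
    return sum(xs) / len(xs)

print(f"CPU-only average: {avg(cpu_fps_per_scene):.0f} fps")   # 200
print(f"GPU-only average: {avg(gpu_fps_per_scene):.0f} fps")   # 200
print(f"combined average: {avg(combined):.0f} fps")            # 140, not 200
```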
The thing is, as a PC hardware enthusiast (and this is a PC hardware enthusiast forum) I am used to having somewhat unusual and expensive hardware that is far from the mainstream hardware that gets tested.
A Threadripper is a .01% choice even in an enthusiast forum, especially for gaming. You are clearly not choosing your CPU based on gaming performance alone, right?
My options are not "combine different reviews" versus "just look at reviews with my exact configuration." It's either combine data from different reviews or have no review to rely on at all, and the latter isn't a very good option.
The article is not about you, you are the exception to the rule.
So, I am all for isolating the GPU in a GPU review and isolating the CPU in a CPU review. That is information I can do something with. Anything else is going to wind up being useless to me. If they want to add it to the review as a nice little extra, that is fine, it doesn't bother me, but without the isolation the review isn't worth my time. It literally contains no information I can do anything with.
You might need to look at only relative performance when shopping for a GPU to put in a Threadripper. But you can certainly look at CPUs when tested in configurations you intend to run. It seems to me that you want to base the rule on the one exception.
 
The value of a review is in being able to tell how much I'd gain by buying the HW being tested. If a 13900K is let's say 25% faster than a 7800X in a game at 1080p, that is meaningless information to me, as at 4K they'll be indistinguishable, and if I were to base my decision on that test I'd be wasting my money.
4K being indistinguishable between CPUs isn't true anymore, because we now have GPUs which aren't limited by 4K.
4090, 4080, 7900XTX --- all show gains with better CPUs. A 7600x gets more out of a 4090 at 4K, than a 5600x does, etc. The differences for some games are not trivial, in terms of percentage.

However, we are still often talking about framerates which are plenty high, anyway.
This video is a good, recent example:


*hell yeah for the 12600k hangin in there*

A point HUB made is that you wouldn't have (and haven't) wasted money by looking at 1080p numbers, because 1080p CPU scaling with past high-end GPUs has panned out to roughly the same percentage differences when using those CPUs with newer GPUs. And now that we have GPUs which are not limited at 4K, CPUs which did better at 1080p with previous GPUs (which could not show a difference at 4K) are going to do better with a 4090 at 4K.
 
Interesting article from TechSpot about CPU testing with high-end GPUs and low resolution.

"The push back to this approach is the alleged claim that it's "unrealistic," and that no one is going to use an RTX 4090 at 1080p, and certainly no one is going to do this with a mid-range to low-end CPU. "

The arguments will be familiar to anyone who's been around here for a while and TechSpot's position is basically "we know what you're saying and you're wrong."

https://www.techspot.com/article/2618-cpu-benchmarks-explained/

They did stuff like retest the i7-8700K vs. the R5 2600X with a 1070, 3060, 3080, and 4090 at 1080p ultra high, taking on arguments that "the 1080 is a bad GPU to pair with a 2600X, so you should test it with a 1070 instead" and explaining, with graphs, why that doesn't give a good picture of the relative power of the two CPUs in gaming (the faster the GPU, the more the 8700K pulls ahead; on the 1080, they're 75fps to 73fps, but on a 4090, it's 120 to 99). Warhammer: Vermintide 2 at 1080p extreme had the same results: the two CPUs have nearly indistinguishable results on the 1080, ballooning out to a 30% advantage with a 4090.

"Using the medium quality preset we found back then that the Core i7-8700K was 17% faster on average, which is not miles off what we're seeing here with the RTX 3060, which provided a 23% margin between the two CPUs. The margin was much the same with the GeForce RTX 3080, and up to 29% faster with the RTX 4090.

It's made clear then that the GeForce GTX 1080 Ti tested at 1080p gave us a much better indication (over five years ago) as to how things might look in the future, when compared to a more mid-range product like the GTX 1070."
Seen this? Hmm 🤔 🧐

 
The amount of cores in consoles doesn't really matter---for PC ports.
Perhaps, but I think it does. How much rework do I want to do for the PC? Now, if you feel that PC game sales trump console, then I suppose there might be something there, but my guess is they tune for the lower platforms first. And then assume that life will be somewhat easier on the PC where there is usually more room. I guess the hugely successful titles might have gobs of extra development going on, but the whole industry looks "discount" to me. Maybe they are totally different code bases, each tuned optimally??
 
Perhaps, but I think it does. How much rework do I want to do for the PC? Now, if you feel that PC game sales trump console, then I suppose there might be something there, but my guess is they tune for the lower platforms first. And then assume that life will be somewhat easier on the PC where there is usually more room. I guess the hugely successful titles might have gobs of extra development going on, but the whole industry looks "discount" to me. Maybe they are totally different code bases, each tuned optimally??
I'm not sure I understand your point?

The PS5 has an underclocked (3.5GHz max boost), cut-down-cache (I don't remember how much, but I think it's less than half), 8-core Zen 2 CPU.

A desktop Zen 2 Ryzen 3600 6-core CPU would likely perform better in most games, except those which do actually scale with more threads (and there aren't many), and in that case it would probably be a wash, as the 3600's boost clocks are much higher (4.2GHz max), meaning its multithreading should mostly make up for the two missing cores.

*In theory, PS5 games should be developed in such a way that the game data works well with the smaller cache size. There are always opportunities when you are dealing with one hardware spec.
 
The problem I have with CPU benchmarks is that they're generally not benching the right games, instead going for ones that are generally more GPU-limited to begin with and already hit well over 120 FPS on just about any modern CPU.

ArmA 3 is probably the most mainstream of the list; I noticed negligible gains going from a 4770K 4.5 GHz to a 7700K 4.9 GHz, but the 12700K bulldozed both of them even at stock clocks - first CPU I ever had to cross the 60 FPS threshold at standard Yet Another ArmA Benchmark settings.

Meanwhile, upgrading from my old GTX 980 to the RX 7900 XTX, then trading the latter for an RTX 4080? Not a damn improvement at all, it's that CPU-limited at 1080p, even with maxed settings.

DCS World is even worse off for CPU utilization, being largely single-threaded with an extra thread for audio, but it actually does see gigantic gains from upgrading the GPU - just not as much as most engines. (They've been teasing this EDGE engine overhaul for what feels like an entire decade already, so I'm skeptical that they'll ever deliver.)

Worst-case scenario would probably be Cortex Command - single-threaded, zero GPU utilization. I'm sure even Noita uses the GPU a little for all the post-processing effects, despite also being a game that slams the CPU with pixel-level physics, particle effects, and even fluid simulations.

Furthermore, they're not always emphasizing the right metric, as 1% lows are what you really want to see improve with a new CPU/platform - something where the 5800X3D clearly dominates in a lot of games (especially the ones I'm generally most interested in), and the 7800X3D will probably extend that lead much further. (But there are other edge cases where those CPUs could lose hard, like RPCS3 where Alder Lake with AVX-512 unofficially enabled dominates all.)

If that wasn't enough, RAM bandwidth and latency also plays a role, as someone benchmarking Star Citizen was quick to show - you could gain as much as 40% with a 13900K at 5.8 GHz P-core on DDR4 by tuning it to 4100 MT/s CL16, showing that even DDR4-3600 CL16 is still leaving performance on the table in certain titles. (DDR5 was not benchmarked, but I'd be curious to see if the 6000 MT/s and up kits fare significantly better on Alder/Raptor Lake compared to DDR4.)

To put it simply, benchmarking CPUs properly is more complicated than just "insert overkill GPU, run at 1080p", especially in the middle of a memory transition where a newer DDR standard brings the usual "higher bandwidth but also higher latencies" trade-off.

There's also another approach I've seen in a few videos that I greatly appreciate, and it's a series of benchmarks of modern GPUs on old CPUs to highlight the performance scaling you'd get. Long story short: anyone on Kaby Lake or older is likely wasting money buying anything better than an RTX 3060 12 GB unless they intend to upgrade the rest of the PC soon afterward. (And even that may be a waste for certain games, as stated above - always build with a given workload/use in mind!)
 
Furthermore, they're not always emphasizing the right metric, as 1% lows are what you really want to see improve with a new CPU/platform - something where the 5800X3D clearly dominates in a lot of games (especially the ones I'm generally most interested in), and the 7800X3D will probably extend that lead much further. (But there are other edge cases where those CPUs could lose hard, like RPCS3 where Alder Lake with AVX-512 unofficially enabled dominates all.)

If that wasn't enough, RAM bandwidth and latency also plays a role, as someone benchmarking Star Citizen was quick to show - you could gain as much as 40% with a 13900K at 5.8 GHz P-core on DDR4 by tuning it to 4100 MT/s CL16, showing that even DDR4-3600 CL16 is still leaving performance on the table in certain titles. (DDR5 was not benchmarked, but I'd be curious to see if the 6000 MT/s and up kits fare significantly better on Alder/Raptor Lake compared to DDR4.)

To put it simply, benchmarking CPUs properly is more complicated than just "insert overkill GPU, run at 1080p", especially in the middle of a memory transition where a newer DDR standard brings the usual "higher bandwidth but also higher latencies" trade-off.

There's also another approach I've seen in a few videos that I greatly appreciate, and it's a series of benchmarks of modern GPUs on old CPUs to highlight the performance scaling you'd get. Long story short: anyone on Kaby Lake or older is likely wasting money buying anything better than an RTX 3060 12 GB unless they intend to upgrade the rest of the PC soon afterward. (And even that may be a waste for certain games, as stated above - always build with a given workload/use in mind!)
Hardware Unboxed tracks the 1% lows in all of their graphs.
They showed DDR4 and DDR5 performance in all of their Raptor Lake and Zen 4 reviews, and included both for Alder Lake in that data as well. (They did not show DDR4 4000+, likely because that is luck of the draw, whether or not your IMC will handle it.)
And they have done a few videos with fairly old and really old CPUs. One of those highlighted that the 10900K's improved gaming performance is less about having 10 cores and more about having more cache----by deactivating cores.
 
Long story short: anyone on Kaby Lake or older is likely wasting money buying anything better than an RTX 3060 12 GB unless they intend to upgrade the rest of the PC soon afterward. (And even that may be a waste for certain games, as stated above - always build with a given workload/use in mind!)
Uh the review showed a clear improvement using a 3080 on an i3 and older ryzen 2600x

A 4770k at 4.7ghz matches closely to 8700k stock, def above a 2600x so a 3080 level card would be the max

Also, isn't the 7900 XTX faster than the 4080 in non-RT? If you have a 12700K, then it's unclear if those GPUs can max that CPU out, so wouldn't you be GPU limited?
 
4K being indistinguishable between CPUs isn't true anymore, because we now have GPUs which aren't limited by 4K.
4090, 4080, 7900XTX --- all show gains with better CPUs. A 7600x gets more out of a 4090 at 4K, than a 5600x does, etc. The differences for some games are not trivial, in terms of percentage.

However, we are still often talking about framerates which are plenty high, anyway.
This video is a good, recent example:


*hell yeah for the 12600k hangin in there*

A point HUB made is that you wouldn't have (and haven't) wasted money by looking at 1080p numbers, because 1080p CPU scaling with past high-end GPUs has panned out to roughly the same percentage differences when using those CPUs with newer GPUs. And now that we have GPUs which are not limited at 4K, CPUs which did better at 1080p with previous GPUs (which could not show a difference at 4K) are going to do better with a 4090 at 4K.

All the more reason to look at 4K instead of 1K. As mentioned if I'm interested in CPU performance I look at tests that completely isolate the CPU. If I'm interested in gaming performance I look at realistic scenarios I intend to run, not might-bes and hypotheticals. Shopping for a gaming CPU doesn't mean I need the fastest, or even the best FPS/$ CPU, I just want the one that is fast enough to not hold me back at 4K.

Looking at 1K scores seems like an e-peen kind of thing to me. OH LOOK, my CPU could run CS at 5000FPS if I run it at 400x300 resolution on a 4090Ti.

People look down and frown at 3DMark because "it is not real world testing", then they turn around and run real game benchmarks at completely unrealistic settings. LOL.
 
Uh the review showed a clear improvement using a 3080 on an i3 and older ryzen 2600x

A 4770k at 4.7ghz matches closely to 8700k stock, def above a 2600x so a 3080 level card would be the max

Also, isn't the 7900 XTX faster than the 4080 in non-RT? If you have a 12700K, then it's unclear if those GPUs can max that CPU out, so wouldn't you be GPU limited?
Depends on which review. The video I had in mind was showing off BF2042 as one of the benchmark titles, and that one's pretty brutal on 4C/8T CPUs, particularly 1% lows. Lots of high-player-count multiplayer FPSs tend to be that way, I've noticed. (PlanetSide 2, Star Citizen, etc.)

The way I see it, if there are substantial gains to be made with the same GPU and a much newer CPU, you're either buying too much GPU or making like me and doing gradual rolling upgrades until it's the gaming PC of Theseus - a GPU here, new CPU/mobo there, etc.

Also, the RX 7900 XTX gets slapped silly by the RTX 4080 in VR, it's not even close in DCS or No Man's Sky. The 4080 actually avoids reprojection much of the time with the Valve Index at 90 Hz, the 7900 XTX can't escape it even at 80 Hz with more generous frame time windows.

Trading GPUs was absolutely worth it for me because of how much better NVIDIA is at VR, even before factoring how my 7900 XTX throttled itself to XT performance levels due to the vapor chamber defect. AMD is going to need one hell of a fine wine driver update to make the 7900 XTX worth the money for VR.

With that said, I didn't bench the 4770K, 7700K and 12700K too extensively in DCS yet because I haven't felt like shoving my RTX 4080 into the former two systems at the moment. Couldn't compare CPUs with the GTX 980 because it was definitely the bottleneck in VR, unplayably so.

If I have enough time, I'll at least test the 7700K system with the 4080 to measure performance losses. I don't think it'll fit in the 4770K system's case, though - Zotac mounted a lengthy heatsink on this thing.
 
All the more reason to look at 4K instead of 1K. As mentioned if I'm interested in CPU performance I look at tests that completely isolate the CPU. If I'm interested in gaming performance I look at realistic scenarios I intend to run, not might-bes and hypotheticals. Shopping for a gaming CPU doesn't mean I need the fastest, or even the best FPS/$ CPU, I just want the one that is fast enough to not hold me back at 4K.

Looking at 1K scores seems like an e-peen kind of thing to me. OH LOOK, my CPU could run CS at 5000FPS if I run it at 400x300 resolution on a 4090Ti.

People look down and frown at 3DMark because "it is not real world testing", then they turn around and run real game benchmarks at completely unrealistic settings. LOL.
Hardware unboxed very clearly explains it.
Based on their poll data, users generally keep CPUs through 3 video card upgrades.
If they test two CPUs right now at 4K, with the current best GPU, and they score the same, but not the same at 1080p: That means 4K was GPU limited and doesn't tell them which CPU is better.

If they test those same two CPUs with the current best videocard, at 1080P, and one is ~20% faster than another CPU: That ~20% difference will remain, with a future, better GPU. This is how they tell you which CPUs are better in certain games, and overall.
Their case in point was the 8700K vs. the Ryzen 2600X, two CPUs which cost about the same. The 8700K is better in gaming, but 4K in the original tests did not show that, because it was GPU limited. Today, put that 8700K with a 4090 at 4K (which is not completely limited at 4K)----and you see that 20% again.
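In other words (hypothetical numbers): the CPU gap measured at 1080p is still there at 4K, it's just hidden whenever the GPU ceiling sits below both CPUs' limits.

```python
# Hypothetical illustration of the argument: a CPU gap measured at 1080p (low settings)
# reappears at 4K once the GPU ceiling rises above both CPUs' limits.
# All fps figures are invented.
cpu_limit = {"8700K": 120.0, "2600X": 100.0}   # CPU-limited fps, measured at 1080p

def fps_at_4k(cpu: str, gpu_ceiling: float) -> float:
    # the slower of "what the CPU can feed" and "what the GPU can draw at 4K"
    return min(cpu_limit[cpu], gpu_ceiling)

for gpu, ceiling in {"2018 flagship GPU": 60.0, "RTX 4090": 140.0}.items():
    a = fps_at_4k("8700K", ceiling)
    b = fps_at_4k("2600X", ceiling)
    print(f"{gpu} at 4K: 8700K ~{a:.0f} fps vs 2600X ~{b:.0f} fps")
# 2018 flagship: both ~60 fps (GPU limited, gap hidden)
# RTX 4090:      120 vs 100 fps (the ~20% CPU gap shows up)
```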
 
As mentioned if I'm interested in CPU performance I look at tests that completely isolate the CPU

I guess you entirely miss the point of testing games at low resolutions then.

The entire point is to isolate CPU performance.

It is not a "game benchmark". Think of it as a "CPU benchmark" like Cinebench R23 and such.

Because "games" are tested in CPU reviews, people get all up in arms about this stuff.

If you can't extrapolate estimated performance of your configuration from the reviews, then that's on you, others can do it just fine.
 
Hardware unboxed very clearly explains it.
And I very clearly disagree with their opinion. That's all there is to it.
Based on their poll data, users generally keep CPUs through 3 video card upgrades.
That's because there is no need to upgrade it. But the only way to learn whether you need to upgrade your CPU or not is? ...Tests run at the resolution you are gaming at.
If they test two CPUs right now at 4K, with the current best GPU, and they score the same, but not the same at 1080p: That means 4K was GPU limited and doesn't tell them which CPU is better.
That's why we should use CPU tests to determine which CPU is better. Gaming testing is very unreliable at that unless you look at aggregate data from dozens of game tests. One or a few games running at 1080p is not conclusive. And I'm not even saying they should not do it, just that they must keep 4K testing too. You as the consumer should never base your purchase on one data point anyway. IDK how HUB does it, I'm not a fan of them.
If they test those same two CPUs with the current best videocard, at 1080P, and one is ~20% faster than another CPU: That ~20% difference will remain, with a future, better GPU. This is how they tell you which CPUs are better in certain games, and overall.
Their case in point was the 8700K vs. the Ryzen 2600X, two CPUs which cost about the same. The 8700K is better in gaming, but 4K in the original tests did not show that, because it was GPU limited. Today, put that 8700K with a 4090 at 4K (which is not completely limited at 4K)----and you see that 20% again.
You can also see that discrepancy between those two CPUs in their single-threaded performance. Despite using more and more cores, games are still happiest with a high single-core clock.
 
I guess you entirely miss the point of testing games at low resolutions then.
I understand why they do it, I'm just not interested in it.
The entire point is to isolate CPU performance.
Yes, in games. Which is irrelevant due to games being highly GPU bound at any reasonable resolution. If everything else is equal, then maybe it can be the final deciding factor. But I've yet to have the problem where two CPUs are exactly the same price and exactly the same performance in every other test except low-res gaming benchmarks with a GPU I'll never have.
It is not a "game benchmark". Think of it as a "CPU benchmark" like Cinebench R23 and such.
I don't mind having the test, but I'll always base my decision on cinebench or passmark, and even other cpu tests before looking at these game cpu benchmarks.
Because "games" are tested in CPU reviews, people get all up in arms about this stuff.
A few comments seem like a non-issue to me. They are making it an issue by putting it in the limelight.
If you can't extrapolate estimated performance of your configuration from the reviews, then that's on you, others can do it just fine.
What are you talking about? I know the performance of my configuration, I see it every day, I don't need to extrapolate anything about it. What I need to know is if a new CPU will have any meaningful impact on the performance, and for that I need to look at 4K tests, as I don't game at 1K.

I understand that they do it to show the differences in game performance of the CPUs, and I'm saying I do not care about that unless all else is equal. Same price, same 4K performance, same single and multi thread CPU performance. Then and only then will I look at these gaming tests.
 
Interesting article from TechSpot about CPU testing with high-end GPUs and low resolution.

"The push back to this approach is the alleged claim that it's "unrealistic," and that no one is going to use an RTX 4090 at 1080p, and certainly no one is going to do this with a mid-range to low-end CPU. "

The arguments will be familiar to anyone who's been around here for a while and TechSpot's position is basically "we know what you're saying and you're wrong."

https://www.techspot.com/article/2618-cpu-benchmarks-explained/

They did stuff like retest the i7-8700K vs. the R5 2600X with a 1070, 3060, 3080, and 4090 at 1080p ultra high, taking on arguments that "the 1080 is a bad GPU to pair with a 2600X, so you should test it with a 1070 instead" and explaining, with graphs, why that doesn't give a good picture of the relative power of the two CPUs in gaming (the faster the GPU, the more the 8700K pulls ahead; on the 1080, they're 75fps to 73fps, but on a 4090, it's 120 to 99). Warhammer: Vermintide 2 at 1080p extreme had the same results: the two CPUs have nearly indistinguishable results on the 1080, ballooning out to a 30% advantage with a 4090.

"Using the medium quality preset we found back then that the Core i7-8700K was 17% faster on average, which is not miles off what we're seeing here with the RTX 3060, which provided a 23% margin between the two CPUs. The margin was much the same with the GeForce RTX 3080, and up to 29% faster with the RTX 4090.

It's made clear then that the GeForce GTX 1080 Ti tested at 1080p gave us a much better indication (over five years ago) as to how things might look in the future, when compared to a more mid-range product like the GTX 1070."
TL;DR: "When there is zero difference in the real world, we can pick certain unrealistic hardware and settings combinations in order to create one so that we have something to write about."

That's not exactly news.
 
TL;DR: "When there is zero difference in the real world, we can pick certain unrealistic hardware and settings combinations in order to create one so that we have something to write about."

That's not exactly news.

No one is actually going to play at those settings with that hardware, but that is not the point.

It shows you what the CPU is capable of if freed of GPU limitations, and allows you to predict how it is likely to perform with a newer GPU down the road.

CPUs are generally reviewed at launch. Say two years later I want to upgrade my GPU, and I read GPU reviews and see that in Title X the new GPU maxes out at 90fps.

If I have unrestrained CPU data I can then go back to that older review, see whether the CPU is capable of more or less than 90fps, and thus easily determine if the CPU I have is sufficient to support the performance my proposed upgrade GPU is capable of delivering, or if I will be CPU limited.

If I only have data that is limited to the GPU's that were available at the time, then looking back at those reviews is useless to me, and I will probably not be able to find any newer tests with my exact CPU on the latest and greatest GPU's I am looking to upgrade to.

In other words, unrestrained CPU data is perfect, because it is universal. I can compare it to unrestrained GPU data and it can give me an understanding of how two unrelated components will perform together.

CPU data that is restrained by the GPUs of the time, at the settings used at the time, is going to be useless unless the reader happens to have that exact hardware and is reading the review right when it is written.

Yes, presenting data this way will result in some morons (or shitposters) claiming "OMG my CPU is so much better than yours, it scored 350fps instead of your shitty 325fps" when neither is going to have a GPU that can run the title faster than ~140fps (fictional scenario), but morons will always misinterpret data, and reviews are intended to create value, not just support the CPU/GPU brand war zealots. If Intel fanboys want to trash on AMD over an unimportant 25fps deficit, let them. Who gives a shit.

When you remove the GPU limit by testing the CPU at the lowest possible resolution and the lowest possible quality settings, you learn something about the CPU, and that's the point of a CPU review. It's up to you to be your own system integrator and evaluate/validate how it all goes together in your application with your chosen hardware combinations.

It is so incredibly limiting and frustrating when a reviewer posts CPU data that tells you nothing about the CPU because it is limited by the GPU. "Hey look, both CPUs performed the same, because they are GPU limited and I used the same GPU" tells me absolutely nothing about the CPU under review other than that it met a certain minimum criterion of being able to support the frame rates the reviewer's GPU was capable of delivering. That's barely a review at all. That's one guy testing his one system combination. I could get that from a freaking forum post.
 
You and M76 should start a club.

Again, the point is not to only do high-resolution benchmarks, but to do those as well as low-resolution, low-setting benchmarks. If I want to upgrade my CPU and mainly have gaming in mind, I do want to see what the real-world, real-resolution and graphics-setting differences would be.

Techpower Up does this.
 
You and M76 should start a club.
You know, I'm here, you can address any of my points at any time if you think I'm wrong on anything. It seems you are the one who needs to start a club, judging by how you can't handle dissent.
 
Techpower Up does this.
Techspot does this too. Here's an easily-found example: 18 games, at 1440p and 4K, generally at ultra quality, along with comparisons with a bunch of other cards.

And then they also do the ones like this, which are for an entirely different reason. And yet, every time it comes up, a few people who don't care about those specific tests come out of the woodwork to tell people how those tests are useless. Every site that does low-quality gaming CPU tests has this exact same problem, including [H]ardOCP and The FPS Review.
 
You know, I'm here, you can address any of my points at any time if you think I'm wrong on anything.
This post is not aimed specifically at you. If one doesn't find a review useful, one could just not comment on it. Just a thought. One doesn't need to come in and tell everyone how useless it is to them.

The latest M2 Pro thread is getting the same tedious argument from people who don't like Macs, just like happens every single time there's a Mac thread. Nobody changes their mind, but pro-Mac people and Mac detractors waste everyone's time with the endless, identical arguments. It's BORING.

Just let someone be wrong on the Internet without comment every once in a while, because it makes threads like this (or every Mac thread) a chore.
 
Techspot does this too. Here's an easily-found example: 18 games, at 1440p and 4K, generally at ultra quality, along with comparisons with a bunch of other cards.

And then they also do the ones like this, which are for an entirely different reason. And yet, every time it comes up, a few people who don't care about those specific tests come out of the woodwork to tell people how those tests are useless. Every site that does low-quality gaming CPU tests has this exact same problem, including [H]ardOCP and The FPS Review.
That example is a review for a GPU, not a CPU. Of course they would have 1440p and 4K benchmarks.

How are we in 2023 on [H]ardOCP and this is still being debated?

 
How are we in 2023 on [H]ardOCP and this is still being debated?
The same way Mac haters can't stop commenting on Mac threads and arguing with everyone else.

I get it. I don't like Apple products myself, and I would probably never buy one without getting a completely different job, because the software I use at work doesn't even have a Mac version. So I generally[1], get this, stay out of them instead of showing up at every darn one and arguing with everyone else.

[1] Podody's nerfect.
 
It is so incredibly limiting and frustrating when a reviewer posts CPU data that tells you nothing about the CPU because it is limited by the GPU. "Hey look, both CPUs performed the same, because they are GPU limited and I used the same GPU" tells me absolutely nothing about the CPU under review other than that it met a certain minimum criterion of being able to support the frame rates the reviewer's GPU was capable of delivering. That's barely a review at all. That's one guy testing his one system combination. I could get that from a freaking forum post.
The same could be said for top-GPU low-resolution testing. It tells us nothing about what, if any, the real-world difference is for the vast majority of gamers. You really do need both sets of tests. Most gamers would benefit from knowing at what CPU tier they could stop wasting money.
 
In other words, unrestrained CPU data is perfect, because it is universal. I can compare it to unrestrained GPU data and it can give me an understanding of how two unrelated components will perform together.
As I said before, it's not that simple. A CPU achieving 70FPS with a 4090 at 1080p in 2023 doesn't mean it will also achieve 70FPS at 4K with a 5060 years down the line, even if a 5060 is capable of a 70FPS average at 4K with the fastest available CPU at the time. You just can't combine results like that. Yes, low-res game tests give a clearer picture of the relative performance of the CPUs in the test, but so does any CPU benchmarking tool, which provides more universally applicable data, as games are all over the place when it comes to CPU optimization.
CPU data that is restrained by the GPUs of the time, at the settings used at the time, is going to be useless unless the reader happens to have that exact hardware and is reading the review right when it is written.
It is useless if you want the exact raw differences between the CPUs in the test. That is why we have CPU-only benchmarks. But it is most useful for seeing how much performance is to be gained in a real-world situation by upgrading to those CPUs, not in a vacuum or hypothetical scenario but right then and there, with numbers that you can take to the bank.
Yes, presenting data this way will result in some morons (or shitposters) claiming "OMG my CPU is so much better than yours, it scored 350fps instead of your shitty 325fps" when neither is going to have a GPU that can run the title faster than ~140fps (fictional scenario), but morons will always misinterpret data, and reviews are intended to create value, not just support the CPU/GPU brand war zealots. If Intel fanboys want to trash on AMD over an unimportant 25fps deficit, let them. Who gives a shit.
That is why it was pointless to make this entire kerfuffle. This is more about trying to get attention, which they seem to have succeeded in, than about addressing a legit widespread issue.
When you remove the GPU limit by testing the CPU at the lowest possible resolution and the lowest possible quality settings, you learn something about the CPU, and that's the point of a CPU review. It's up to you to be your own system integrator and evaluate/validate how it all goes together in your application with your chosen hardware combinations.
Yes it is, always was. Just as it falls to you to know how to interpret any data.
It is so incredibly limiting and frustrating when a reviewer posts CPU data that tells you nothing about the CPU because it is limited by the GPU. "Hey look, both CPUs performed the same, because they are GPU limited and I used the same GPU" tells me absolutely nothing about the CPU under review other than that it met a certain minimum criterion of being able to support the frame rates the reviewer's GPU was capable of delivering. That's barely a review at all. That's one guy testing his one system combination. I could get that from a freaking forum post.
The only way to learn the minimum you can spend without getting CPU limited is by looking at these tests. Benchmarking games at low res is just a weird and unnecessarily convoluted alternative to pure CPU benchmarks. I don't mind if they do it, more data is always better, but it's just not near the top of my list of things to look at.
 
I think my major gripe with CPU/GPU reviews these days is some of the games they test and how they test them. For some games, you really need to test in certain environments or parts of the game. A lot of in-game benchmarks won't show the worst parts of the game. Of course, there is only so much time in the day and reviewers can't benchmark every game fairly. So most of the time they pick the in-game benchmark or one section of a game save that can be repeated across test platforms. It just means that I ignore a lot of their results.

The most interesting debate lately, to me, has been over DDR4 vs. DDR5, with some recommending DDR5 instead of top-end DDR4 that is cheaper.
 