Intel Core i9-14900K Review - Reaching for the Performance Crown

I had to turn my AC on just to read the review.

And I'm on the 13700K.
Lol, unless you're running an all-core full load doing work, it's just a gross exaggeration of real-world use, especially gaming. But Hardware Unboxed loves to use an all-core full-load benchmark or workload to say it's "hot" when that's realistically not what the majority of users would experience.
 
I see what you're saying. But at the end of the day it's a refresh of a refresh. :confused:🤷‍♂️
Yeah, but supposedly there's a 5-10% internal yield improvement. They get to charge the same while getting better binning and better yields, so profits on them go up.
 
Lol, unless you're running an all-core full load doing work, it's just a gross exaggeration of real-world use, especially gaming. But Hardware Unboxed loves to use an all-core full-load benchmark or workload to say it's "hot" when that's realistically not what the majority of users would experience.
Granted, idle-to-low-usage consumption is much more representative of real workloads nowadays.
But I like my CPU, below the Babel cooling tower of an NH-D15 with two iPPC 3000 fans trying to keep it under control.
 
Imagine making a 24-core CPU that loses to an 8-core. Omegalul
Which 8-core CPU would that be? It is quite a bit above a 7700X, and the 11900K is the last Intel 8-core. If we are talking about games, I am pretty sure the 7800X3D would beat the 24-core Threadrippers in all of them; it is not like we should expect adding cores to help in 99% of them.

[Attached chart: relative CPU performance]
 
Nice try lol
If we can easily imagine a 16-core losing to an 8-core by a good amount in games:

[Attached chart: relative gaming performance, 1920x1080]


Why would going to 24 significantly change that?

Obviously 24 cores are not for playing games; looking at anything above a 14600K for gaming has, in good part, nothing to do with the core count (a bit like the 7700X >= 7950X results tend to show).
 
If we can easily imagine a 16-core losing to an 8-core by a good amount in games:

[Attached chart: relative gaming performance, 1920x1080]

Why would going to 24 significantly change that?

Obviously 24 cores are not for playing games; looking at anything above a 14600K for gaming has, in good part, nothing to do with the core count (a bit like the 7700X >= 7950X results tend to show).
I mean that also shows AMD's 16-core CPUs losing to 8.
If gaming is all you want, then the 7800X3D is the current leader by a mile.
 
Yes, thus the "easily imagine a 16-core losing to an 8-core" comment.

That poster cannot be serious.
Games don't scale automatically; thread allocation is determined during the initial engine configuration, and going beyond 6 or 8 cores has very few real-world benefits for existing engines.

There are too many dependencies to leave it to the OS or some other thread director to balance, so until those improve significantly, core allocation is still determined by the developers.
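To make that concrete, here's a minimal Python sketch (not from any real engine) of a job system that fixes its worker count at engine init; `ENGINE_WORKER_CAP` and `make_job_system` are made-up names for illustration.

```python
# Minimal sketch of a job system whose worker count is fixed at engine init,
# so physical cores beyond the cap simply go unused by the game's job system.
import os
from concurrent.futures import ThreadPoolExecutor

ENGINE_WORKER_CAP = 8  # hypothetical cap chosen by the developers up front

def make_job_system() -> ThreadPoolExecutor:
    workers = min(os.cpu_count() or 1, ENGINE_WORKER_CAP)
    return ThreadPoolExecutor(max_workers=workers)

pool = make_job_system()  # on a 24-core CPU this still creates only 8 workers
```

That's the sense in which cores beyond what the engine was configured for do little for frame rates.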
 
Are you guys blowing the power consumption up to be a bigger deal than it actually is?

What is the power-per-core comparison? For example, how many watts per Intel P-core? And how many watts per AMD big core?

Comparing an 8-core CPU to a 24-core CPU isn't an apples-to-apples comparison.

What are the per-P-core and per-big-core numbers?

Also, what if you turned off all the E-cores on Intel and just compared it straight P-cores to big cores? All the E-cores on Intel translate to faster performance in non-gaming workloads, while the P-cores trade blows with AMD's big cores.
So it's odd to just throw around random comparisons.
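For what it's worth, the per-core arithmetic being asked about is easy to sketch. The package wattages below are placeholder assumptions chosen only to show the shape of the comparison, not measured figures from the review.

```python
# Naive per-core power split. All wattages are illustrative assumptions,
# not measurements from any review.
def watts_per_core(package_watts: float, cores: int) -> float:
    return package_watts / cores

intel_p_only = watts_per_core(package_watts=150.0, cores=8)  # assumed: P-cores only, E-cores disabled
amd_big      = watts_per_core(package_watts=90.0, cores=8)   # assumed: 8 big Zen 4 cores

print(f"Intel P-core (assumed load): ~{intel_p_only:.0f} W/core")
print(f"AMD big core (assumed load): ~{amd_big:.0f} W/core")
```

A straight package-power divided by core-count split also ignores uncore, cache, and I/O power, which is part of why reviewers tend to compare whole-package numbers instead.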
 
Are you guys blowing the power consumption up to be a bigger deal than it actually is?

What is the power-per-core comparison? For example, how many watts per Intel P-core? And how many watts per AMD big core?

Comparing an 8-core CPU to a 24-core CPU isn't an apples-to-apples comparison.

What are the per-P-core and per-big-core numbers?

Also, what if you turned off all the E-cores on Intel and just compared it straight P-cores to big cores? All the E-cores on Intel translate to faster performance in non-gaming workloads, while the P-cores trade blows with AMD's big cores.
So it's odd to just throw around random comparisons.
Why is it odd if we're comparing workloads to workloads?
 
Comparing apples to oranges because they are both fruits.

See how silly that sounds?
CPUs are not fruits. You do not eat or taste a CPU; you use it to compute stuff. Results of said computations will be identical regardless of the underlying architecture, so the performance per core is not a relevant metric for the end user.

Performance per watt and power consumption for the total package, on the other hand, are relevant because users will have to account for this by paying the electricity bill and handling the cooling solution.
 
It's all about the performance level the user is looking for.
If you want max FPS in games with a 4090 (anything lower is pointless to talk about), a max-overclocked Intel i9 13th/14th gen (with 8000-8400 MT/s RAM) will be above a max-overclocked 7000X3D (with 6400 MT/s RAM).
For stock performance, where you just drop them into the case, the 7000X3D is the better choice.

Everything is a personal choice of the user: do they want to pay more money for a little more FPS?

The 7800X3D is a nice CPU and it would be my personal choice too, but I can't say it is the best CPU.
And the fun fact about the watts adding more to the bill... if someone has the money to buy a 14900K + 4090 + all the water cooling that's needed, I don't think they will think about a few $$ for electricity...
 
CPUs are not fruits. You do not eat or taste a CPU; you use it to compute stuff. Results of said computations will be identical regardless of the underlying architecture, so the performance per core is not a relevant metric for the end user.

Performance per watt and power consumption for the total package, on the other hand, are relevant because users will have to account for this by paying the electricity bill and handling the cooling solution.
Everything you just said here is completely wrong. It's actually astonishing that anyone on this forum could be this wrong about the subject; it's blowing my mind.
 
I'm not loyal to either brand, but thermals would concern me at least as much as power consumption with Intel right now. But if you want to put in the extra effort of delidding/liquid metal and/or watercooling, along with taking the time to tweak all the BIOS settings for maximum efficiency for your use case, then more power to you -- that's the only way I'd go with Intel if I was building right now. But that's just me, and I'm sure someone could still have a positive experience with Intel without doing all those things.

My preference would probably be AMD if I were building a new gaming machine right now. I just think the 3D cache tech is interesting, and it's nice to not have to worry about thermals, power consumption, or BIOS tweaks as much out of the box. It's just less hassle and less expensive, and you have those crazy cases where the 3D cache gives you really huge leaps in performance ahead of Intel's offerings.

I am looking forward to seeing how the next generation AMD and Intel CPUs stack up against each other. That will probably be upgrade time for me.
 
Everything you just said here is completely wrong. It's actually astonishing that anyone on this forum could be this wrong about the subject; it's blowing my mind.
The comparison here is not core counts but the FPS numbers you can get from top CPUs, so it's fair enough.
 
Results of said computations will be identical regardless of the underlying architecture, so the performance per core is not a relevant metric for the end user.
These aren't basic math computations being made, and with AI coming to the forefront of compute, I cannot overstate how inaccurate your statement is.
It isn't 1960, and these CPUs are doing more than just general accounting.

Performance per watt and power consumption for the total package, on the other hand, are relevant because users will have to account for this by paying the electricity bill and handling the cooling solution.
Performance per core is still just as relevant as the above, especially clock-for-clock against the previous generation.
Did the lack of performance gains going from 6th to 7th gen and from 10th to 11th gen teach you nothing?
 
These aren't basic math computations being made, and with AI coming to the forefront of compute, I cannot overstate how inaccurate your statement is.
It isn't 1960, and these CPUs are doing more than just general accounting.
So you're suggesting that running the same computation on either AMD or Intel will give different results? emphy is right here, and I have no idea what you are suggesting. What does AI have to do with anything?

Performance per core is still just as relevant as the above, especially clock-for-clock against the previous generation.
Did the lack of performance gains going from 6th to 7th gen and from 10th to 11th gen teach you nothing?
Performance per core isn't really relevant. AMD is using all "normal" cores, while Intel is using a mix of P- and E-cores. You can't do 1:1 comparisons. Who knows, maybe one day desktop CPUs will look more like ARM CPUs and have big, fast "prime" cores, performance cores, and efficiency cores? Or be designed even more differently than that?

It doesn't matter how the CPU is architected; the actual performance and power draw across the workloads you run are what matter.
 
Performance per watt and power consumption for the total package, on the other hand, are relevant because users will have to account for this by paying the electricity bill and handling the cooling solution.
People often get hung up on performance-per-watt metrics, but they're largely irrelevant for desktop systems. Performance per watt came to the forefront of CPU reviews when Intel started pushing that narrative. They started doing that because Intel has always cared about its server market far more than the desktop market. In a data center, performance per watt matters when electric bills are in the multi-thousand-dollar-a-month range. Getting more performance per watt and more performance per core in a given amount of rack space is compelling in such cases. Not only do electric bills come into play for data center operations, but rent, floor space, and how much computing power they can cram into that space all become relevant.

Performance per watt was also pushed by Intel when the market shifted from desktops to laptops and other mobile devices. At some point Intel quit selling us desktop processors and started selling us CPUs that were largely repurposed mobile CPUs for the desktop, and server parts for HEDT. In the mobile space, performance per watt matters as it impacts battery life and the performance of the system while on battery. However, on the desktop, performance per watt really isn't relevant, as power consumption differences between one CPU and another aren't going to impact your electric bill or your rent the way they do at larger scales like those found in datacenters. Let's also be clear: the performance-per-watt narrative was highlighted at a time when Intel was only making iterative performance updates each generation and needed a positive way to spin its products.

This really happened in a post-Sandy Bridge world where Intel was only getting a performance uplift of 2-3% per generation and clock speeds on its CPUs were falling each generation. You would see things like the replacement for the Intel Core i7 2700K (the Intel Core i7 3770K) having the exact same clocks as its predecessor but less overclocking headroom. We typically lost about 100MHz or so each generation. So naturally Intel had to showcase performance per watt as a way to explain that while the new CPUs didn't clock as high, you weren't losing performance and you were gaining efficiency. Prior to those CPUs, no one gave a damn or even really talked about performance-per-watt metrics. The only time power consumption was ever really brought up was to compare CPUs from Intel and AMD and talk about which one was the power-hungry pig that ran hotter than the other. But it was a discussion primarily about TDP and heat dissipation, not performance per watt or efficiency, which again is immaterial to desktop applications.

I've never seen a significant difference in my power bill simply from running a CPU that was more power hungry than some other CPU. Even running a 10980XE overclocked to 4.8GHz didn't impact my power bill in an appreciable way, and that bastard pulled far more power than any of Intel's LGA 1700 CPUs ever could. I've run dual-CPU systems for years and never had an issue with that either. I've had plenty of AMD Ryzens and again it was never an issue. How often you use your microwave, run your air conditioning, do laundry, or a million other things tends to impact your power bill much more. Running additional computers certainly can impact the bill, but again, the difference between a Ryzen 9 7950X3D and a Core i9 14900K will be negligible at best.
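To put rough numbers on that claim: the sketch below assumes a 100 W delta under load, 3 hours a day of loaded use, and two illustrative electricity rates; none of these figures come from the post.

```python
# Back-of-the-envelope yearly cost of one CPU drawing more power than another.
# The wattage delta, daily hours, and $/kWh rates are illustrative assumptions.
def yearly_cost_delta(extra_watts: float, hours_per_day: float, price_per_kwh: float) -> float:
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

print(f"US-ish rate ($0.15/kWh): ${yearly_cost_delta(100, 3, 0.15):.0f}/year")
print(f"EU-ish rate ($0.35/kWh): ${yearly_cost_delta(100, 3, 0.35):.0f}/year")
```

Whether a figure in that ballpark counts as negligible is the judgment call being argued here, and it scales directly with hours of loaded use and local rates, which is the later point about Europe.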

Lastly, I want to address the topic of cooling since you brought it up. While Intel's CPUs do currently consume more power and generate more heat than AMD's, it's important to realize that from the user's perspective there is no difference. When you can use the same cooling solutions on both CPUs, it's almost irrelevant. The same AIOs that cool your AMD CPUs cool Intel's CPUs. Yeah, you might see higher temperatures on one CPU versus another, but what does that mean for the user? Not much, as it turns out. These CPUs are basically designed to hit their thermal limits and throttle under load. Better cooling solutions may allow slightly higher clock speeds and let you sustain those clocks for longer periods of time, but they won't impact your performance that much unless you are using something totally inadequate for cooling. The impact on your room temperature is also negligible compared to the heat typically dumped by your GPU anyway.

In other words, the same solutions you would use for high-end AMD CPUs are the same ones you would use for a high-end Intel CPU and vice versa. Threadripper notwithstanding, this translated to Intel's HEDT CPUs as well, meaning I used the same waterblock on the 10980XE as I did on a 10900K or even an 11900K. Using that same waterblock on the 12900K was fine, and a purpose-built LGA 1700 waterblock only shaved one or two degrees off the CPU temps. You can't get away with smaller, cheaper air cooling on high-end AMD CPUs any more than you can on Intel CPUs.

Again, if you are using the same cooling solutions for AMD and Intel CPUs, then what does the cooling have to do with anything?
And the fun fact about the watts adding more to the bill... if someone has the money to buy a 14900K + 4090 + all the water cooling that's needed, I don't think they will think about a few $$ for electricity...
Precisely. I've never once thought about my electric bill when buying hardware. The fact of the matter is that if you are gaming, a high-end GPU is the bigger hit to power consumption anyway. To further that example, I've never heard anyone state that the reason they opted not to go with a higher-end CPU had anything to do with power consumption. It usually boils down to what their computing needs are, whether they'll benefit from extra CPU cores, and how much they are willing to spend on the CPU itself. It's not because CPU A is going to raise their electric bill significantly over CPU B.

I am saying all this as someone who has done CPU reviews and sees power consumption data on systems literally every week while reviewing hardware. Switching this stuff out doesn't significantly impact my electric bill, and the data, while interesting, doesn't really mean much in this application.
 
Lastly, I want to address the topic of cooling since you brought it up. While Intel's CPUs do currently consume more power and generate more heat than AMD's, it's important to realize that from the user's perspective there is no difference. When you can use the same cooling solutions on both CPUs, it's almost irrelevant. The same AIOs that cool your AMD CPUs cool Intel's CPUs. Yeah, you might see higher temperatures on one CPU versus another, but what does that mean for the user? Not much, as it turns out. These CPUs are basically designed to hit their thermal limits and throttle under load. Better cooling solutions may allow slightly higher clock speeds and let you sustain those clocks for longer periods of time, but they won't impact your performance that much unless you are using something totally inadequate for cooling. The impact on your room temperature is also negligible compared to the heat typically dumped by your GPU anyway.

I don't get this perspective at all. Physics is physics. More watts requires more cooling capacity. Heat doesn't just disappear; it gets transferred. If every AMD part is using <=1/4th-1/3rd of the wattage of the equivalent Intel part for equivalent workloads, you're going to need a smaller cooler for it. Furthermore, how about VRMs? Most AMD CPUs can currently run on basically garbage motherboards because they're so efficient that the VRMs can be cut down. I doubt one could say the same for the 14900K (or perhaps even the 14700K). Furthermore, less heat means a quieter cooler, means less airflow required, means a quieter case, means less heat inside said case. I could see you actually trying to argue this if the difference wasn't as huge as it is between AMD and Intel. But it is. It's not a small gulf. A 7800X3D consumes <=50-60W on average while gaming and stays around ~60C on average with my Noctua air cooler (an NH-U14S, which is a single tower), while giving me top-tier framerates, if not the highest-tier framerates. The equivalent Intel part, while gaming... what is that even going to be?

Electricity I won't comment on, that's highly region dependent. If you're in Europe and you game a lot, I think the difference can be quite significant, though.

Efficiency is efficiency. If it were a difference of 1.2x or something, that would be one thing. But this gulf? No, sorry, I can't see anyone arguing that it just doesn't matter. It does; it just depends on whether it matters to you.
 
I don't get this perspective at all. Physics is physics. More watts requires more cooling capacity. Heat doesn't just disappear; it gets transferred. If every AMD part is using <=1/4th-1/3rd of the wattage of the equivalent Intel part for equivalent workloads, you're going to need a smaller cooler for it. Furthermore, how about VRMs? Most AMD CPUs can currently run on basically garbage motherboards because they're so efficient that the VRMs can be cut down. I doubt one could say the same for the 14900K (or perhaps even the 14700K). Furthermore, less heat means a quieter cooler, means less airflow required, means a quieter case, means less heat inside said case. I could see you actually trying to argue this if the difference wasn't as huge as it is between AMD and Intel. But it is. It's not a small gulf. A 7800X3D consumes <=50-60W on average while gaming and stays around ~60C on average with my Noctua air cooler (an NH-U14S, which is a single tower), while giving me top-tier framerates, if not the highest-tier framerates. The equivalent Intel part, while gaming... what is that even going to be?

Electricity I won't comment on, that's highly region dependent. If you're in Europe and you game a lot, I think the difference can be quite significant, though.

Efficiency is efficiency. If it were a difference of 1.2x or something, that would be one thing. But this gulf? No, sorry, I can't see anyone arguing that it just doesn't matter. It does; it just depends on whether it matters to you.
I am not saying that there isn't a technical difference between AMD and Intel when it comes to power consumption and heat output. Obviously there is. What I'm saying is that on a high-end build, it doesn't really matter in a practical sense. Where the rubber meets the road, you use pretty much the same coolers on both to get the job done. Sure, you can get away with less on the AMD side, but that's not optimal. You could very well be leaving some performance on the table by using something like the Wraith Spire. How much depends on what you are doing with it. Yes, this is a larger problem on the Intel side, but it isn't as if you are going to see a lot of high-end AMD CPUs being cooled with a Wraith Spire. Most people will at least get a decent high-end air cooler or a decent AIO for something like that. Again, these are solutions you could use on Intel CPUs just as easily.

As for the motherboards, you are missing a key point about VRMs. VRMs have been getting more and more powerful over the last several generations. What we call a mid-tier board or even a budget board has what would have been high-end VRMs a few years ago. I'm seeing 50A and 60A power stages on pretty cheap boards. Mid-range stuff has power stages upwards of 90A now, while the high-end stuff is all 105A. That's why you can get away with running modern CPUs on cheaper motherboards. AMD's CPUs still pull a lot of power. You have to look at things through a larger lens than just comparing AMD's current products to Intel's latest. Yes, AMD is more efficient than Intel, but AMD's current CPUs pull a decent amount of power when pushed. You don't need an 18+1+1 phase configuration to run a modern CPU. Not even Intel's.
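A rough sketch of the headroom those power-stage ratings imply; the phase counts, per-stage current ratings, and Vcore below are assumptions for illustration, not the specs of any particular board.

```python
# Theoretical VRM headroom vs. CPU draw. Phase counts, per-stage ratings,
# and Vcore are illustrative assumptions, not specs of a specific board,
# and real designs derate this for thermals and transient response.
def vrm_capacity_watts(phases: int, amps_per_stage: float, vcore: float) -> float:
    return phases * amps_per_stage * vcore

def cpu_vcore_amps(package_watts: float, vcore: float) -> float:
    return package_watts / vcore

budget = vrm_capacity_watts(phases=8, amps_per_stage=60, vcore=1.25)     # ~600 W theoretical
highend = vrm_capacity_watts(phases=18, amps_per_stage=105, vcore=1.25)  # ~2360 W theoretical
draw = cpu_vcore_amps(package_watts=253, vcore=1.25)                     # ~200 A at a 253 W limit

print(f"8x60A budget-ish VRM: ~{budget:.0f} W, 18x105A high-end VRM: ~{highend:.0f} W")
print(f"A 253 W CPU at 1.25 V pulls roughly {draw:.0f} A on the Vcore rail")
```

Even with generous derating for thermals and transients, that's the sense in which a mid-range board's VRM is overbuilt relative to a 253 W power limit.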

Motherboard VRMs are usually so overbuilt these days that you can run garbage boards with both AMD and Intel CPUs. A board with more powerful (and more numerous) MOSFETs will run more efficiently under heavier load, but I've run these CPUs on budget-tier B760 boards without issues. I've also run 12- and 16-core CPUs on the dreaded MSI X570-A that the YouTubers all trashed for its poor VRMs. Sure, they get hot running CPUs like that beyond stock speeds, but it was doable. Both vendors' CPUs chuck their TDPs out the window pretty fast. You can consume more than 150w on AMD's CPUs under load, and while Intel has increased its power limit to 253w, these CPUs do not consume that much on average. Granted, Intel CPUs may pull upwards of a third more power in similar cases, but this is well within the dissipation capabilities of the coolers you would use on either CPU. Again, in practical terms it doesn't really matter.

I can't speak to pricing in the EU, but in the U.S. it doesn't matter. Your electric bill isn't going to be all that different if you choose AMD instead of Intel or vice versa assuming all other factors are equal.
 
I am not saying that there isn't a technical difference between AMD and Intel when it comes to power consumption and heat output. Obviously there is. What I'm saying is that on a high-end build, it doesn't really matter in a practical sense. Where the rubber meets the road, you use pretty much the same coolers on both to get the job done. Sure, you can get away with less on the AMD side, but that's not optimal. You could very well be leaving some performance on the table by using something like the Wraith Spire. How much depends on what you are doing with it. Yes, this is a larger problem on the Intel side, but it isn't as if you are going to see a lot of high-end AMD CPUs being cooled with a Wraith Spire. Most people will at least get a decent high-end air cooler or a decent AIO for something like that. Again, these are solutions you could use on Intel CPUs just as easily.

I still disagree with you on this part. You're saying it's "suboptimal" on the AMD side, but.. no, it's not. You can get away with less on AMD side without losing any performance. Because, again, it's just physics. Less wattage means less cooling capacity required. Depending on the rest of your case's setup, your other temperatures will also benefit from it. I would like to see actual holistic full-system temperatures with AMD vs Intel, especially, say 7800X3D or even 7950X3D with a 4090 vs a 14900K with a 4090. I can almost guarantee you that in order to accommodate the 14900k at full throttle, you're going to require some compromise somewhere, unless you're using an open bench. Your build will somehow be restricted by it. Having a chip that can be easily cooled off of a single fan 120mm radiator (or just any air cooler), with a max tdp of like 90W, is quite a difference from one that requires a 240-360mm AIO (or high end Noctua tower) as a baseline, and can go over 3x that. Not just from a holistic system perspective, but from the perspective of how much money you actually need to use to make the upgrade.

Now, granted, the Thermalright Peerless Assassin is a great option available to both that only runs ~$35, but either way it's going to only dump like 60W of heat into the case with one CPU, while with the other it's dumping... what, 3x that, at the minimum? I'm not sure how you can say that doesn't matter (supposing it's even sufficient for a 13900K).

As for the motherboards, you are missing a key point about VRMs. VRMs have been getting more and more powerful over the last several generations. What we call a mid-tier board or even a budget board has what would have been high-end VRMs a few years ago. I'm seeing 50A and 60A power stages on pretty cheap boards. Mid-range stuff has power stages upwards of 90A now, while the high-end stuff is all 105A. That's why you can get away with running modern CPUs on cheaper motherboards. AMD's CPUs still pull a lot of power. You have to look at things through a larger lens than just comparing AMD's current products to Intel's latest. Yes, AMD is more efficient than Intel, but AMD's current CPUs pull a decent amount of power when pushed. You don't need an 18+1+1 phase configuration to run a modern CPU. Not even Intel's.

Motherboard VRMs are usually so overbuilt these days that you can run garbage boards with both AMD and Intel CPUs. A board with more powerful (and more numerous) MOSFETs will run more efficiently under heavier load, but I've run these CPUs on budget-tier B760 boards without issues. I've also run 12- and 16-core CPUs on the dreaded MSI X570-A that the YouTubers all trashed for its poor VRMs. Sure, they get hot running CPUs like that beyond stock speeds, but it was doable. Both vendors' CPUs chuck their TDPs out the window pretty fast. You can consume more than 150w on AMD's CPUs under load, and while Intel has increased its power limit to 253w, these CPUs do not consume that much on average. Granted, Intel CPUs may pull upwards of a third more power in similar cases, but this is well within the dissipation capabilities of the coolers you would use on either CPU. Again, in practical terms it doesn't really matter.

If we're talking about the X3D chips, the 7800X3D uses about 90W at absolute maximum. Maybe 100W on the highest stress test possible. Intel is quite a ways away from that. I'm certainly no expert on the power delivery portion. I'm assuming that hotter VRMs will degrade faster, but maybe still at a reasonably slow rate.

But putting that aside, can you actually prove that a 13900K can run on a very budget motherboard? Do you have any reviews that back this up? Buildzoid has said himself that even the cheapest B650M motherboard out there will trivially run a 7950X with no performance loss. (And also the last gen is pretty efficient, too, so the X570 example is kinda... whatever.) Perhaps even overclock it. https://www.asrock.com/mb/AMD/B650M-HDVM.2/index.asp This is a $125 board new. Can Intel actually prove that its highest-end CPUs can run off the equivalent on the Intel side with absolutely no performance loss? If you can show empirical evidence of this, I can let it go, but I'm skeptical. To be totally fair, I haven't watched Buildzoid's roundups of the Intel side at all, so you could be right.
 
Anything with a B760M chipset will handle a 13900K with no issues related to gaming workloads. And those can be found in the $150s.
 
Anything with a B760M chipset will handle a 13900K with no issues related to gaming workloads. And those can be found in the $150s.

Sure but we're not talking just gaming workloads, though I'm focusing on gaming. AFAIK, any of AMD's chips will work even when stress testing on that $125 B650M mobo that I linked. The X3D line is a given, they have ridiculously low TDP.
 
Sure but we're not talking just gaming workloads, though I'm focusing on gaming. AFAIK, any of AMD's chips will work even when stress testing on that $125 B650M mobo that I linked. The X3D line is a given, they have ridiculously low TDP.
It's that cache; it makes up for the bottlenecks that two memory channels impose on gaming workloads. But neither AMD nor Intel can stretch out to four on consumer devices, because that eats into the workstation market and they can't have that.
More cache, fewer fetches. Intel needs to get its version of it out, but it's too slow a process for mass deployment. Intel could do it with HBM3, as that has been proven to work, but it would cut their production speed down to levels where it becomes cost-prohibitive. That's where their new packaging hardware comes in, but that's still 12 months out if we are being optimistic.
 
I still disagree with you on this part. You're saying it's "suboptimal" on the AMD side, but.. no, it's not. You can get away with less on AMD side without losing any performance. Because, again, it's just physics. Less wattage means less cooling capacity required.
I didn't disagree with this on a technical level. I disagree on a practical level. And yes, a Wraith Spire versus a 360 AIO or custom water is sub-optimal. Sure, it works, but it doesn't work as well. Back in the days of fixed CPU clocks you'd be right. However, in today's systems, keeping your CPU cooler allows it to achieve higher boost clocks and maintain them longer. Granted, cooling isn't the only condition that governs this, but the point stands.
Depending on the rest of your case's setup, your other temperatures will also benefit from it. I would like to see actual holistic full-system temperatures with AMD vs Intel, especially, say 7800X3D or even 7950X3D with a 4090 vs a 14900K with a 4090. I can almost guarantee you that in order to accommodate the 14900k at full throttle, you're going to require some compromise somewhere, unless you're using an open bench. Your build will somehow be restricted by it.
I don't know where you are getting this from. How would you be restricted by it? How would you have to compromise? I've built lots of systems with these CPUs and I've never found this to be the case. What you are talking about is theoretical, not practical. It's damn sure not the way it really works. You can run RAM at the same speeds, you can still use the same SSDs without throttling, and you can still get the same performance out of the GPUs on both platforms. So how are you going to be restricted by a 14900K over a 7950X3D?
Having a chip that can be easily cooled off of a single fan 120mm radiator (or just any air cooler), with a max tdp of like 90W, is quite a difference from one that requires a 240-360mm AIO (or high end Noctua tower) as a baseline, and can go over 3x that. Not just from a holistic system perspective, but from the perspective of how much money you actually need to use to make the upgrade.
Again, the high-end AMD CPUs do not have a TDP of 90w, and even if AMD reported that, it would be bullshit. Those CPUs have pulled upwards of 160w or so under testing. I suspect you are talking about the 7800X3D, but that's not AMD's only CPU, and gaming workloads are only part of the equation. Some people use their machines for more than gaming.
Now, granted, the Thermalright Peerless Assassin is a great option available to both that only runs ~$35, but either way it's going to only dump like 60W of heat into the case with one CPU, while with the other it's dumping... what, 3x that, at the minimum? I'm not sure how you can say that doesn't matter (supposing it's even sufficient for a 13900K).
I'm not sure where you are getting this stuff from. If your case has proper airflow, you aren't dumping all of that into the case. Most of it should get vented out. Sure, your internal system temperature might be slightly higher, but it doesn't matter in a practical sense. It's not limiting your GPU or your RAM in some way. Again, I have no idea where you are getting this stuff from.
If we're talking about the X3D chips, the 7800X3D uses about 90W at absolute maximum. Maybe 100W on the highest stress test possible.
Well, I'm comparing chips at closer price points. The 14900K and 13900K should primarily be compared to the 7900X, 7950X, and their X3D counterparts. PC Perspective showed the 7950X3D pulling 150w under testing.
Intel is quite a ways away from that. I'm certainly no expert on the power delivery portion. I'm assuming that hotter VRMs will degrade faster, but maybe still at a reasonably slow rate.
In theory, cheaper motherboards with lesser VRMs will degrade faster, but they'll still last long enough to make it through their expected service life with a comfortable margin beyond that. Of course, high-end boards with even more vastly overbuilt VRMs will not run as hot, as they spread the load out and operate in a range that's far more efficient. That's why the dreaded MSI X570-A was classified as a non-starter for overclocked 12-core CPUs and a no-go for 16-core CPUs. It still works, but who knows how long the board would last doing it. I've run 12th and 13th generation CPUs on B760 boards. It's fine.
But putting that aside, can you actually prove that a 13900K can run on a very budget motherboard?
No, I can't say that with 100% certainty. I haven't tested or evaluated every budget Intel board in existence, but yes, you can run those CPUs on some pretty cheap motherboards. I've personally run 13700Ks and 12900Ks on some pretty cheap boards, like the B760 Steel Legend from ASRock, which I reviewed here. Tom's Hardware reviewed a B760 motherboard with a 13900K here.
Do you have any reviews that back this up?
Yes, linked above.
Buildzoid has said himself that even the cheapest B650M motherboard out there will trivially run a 7950X with no performance loss. (And also the last gen is pretty efficient, too, so the X570 example is kinda... whatever.) Perhaps even overclock it. https://www.asrock.com/mb/AMD/B650M-HDVM.2/index.asp This is a $125 board new. Can Intel actually prove that its highest-end CPUs can run off the equivalent on the Intel side with absolutely no performance loss? If you can show empirical evidence of this, I can let it go, but I'm skeptical. To be totally fair, I haven't watched Buildzoid's roundups of the Intel side at all, so you could be right.
I'm pretty confident you can run a 13900K or 14900K on most boards without issue or performance loss. I've never seen any motherboard that couldn't do it. Though that's not to say that such boards don't exist.
 
I didn't disagree with this on a technical level. I disagree on a practical level. And yes, a Wraith Spire versus a 360 AIO or custom water is sub-optimal. Sure, it works, but it doesn't work as well. Back in the days of fixed CPU clocks you'd be right. However, in today's systems, keeping your CPU cooler allows it to achieve higher boost clocks and maintain them longer. Granted, cooling isn't the only condition that governs this, but the point stands.

I'm using a single-tower Noctua cooler. I'm not encountering any throttling on my 7800X3D at all, as far as I'm aware. During gaming tests, a 7950X3D pulls around 70W. Buildzoid did a test with a Wraith cooler on a 7800X3D on an open bench and also encountered no throttling. I'm not aware of what a 7950X3D would need during a synthetic test, but it's surely much less than a 13900K/14900K.
I don't know where you are getting this from. How would you be restricted by it? How would you have to compromise? I've built lots of systems with these CPUs and I've never found this to be the case. What you are talking about is theoretical, not practical. It's damn sure not the way it really works. You can run RAM at the same speeds, you can still use the same SSDs without throttling, and you can still get the same performance out of the GPUs on both platforms. So how are you going to be restricted by a 14900K over a 7950X3D?

Because you need a large radiator somewhere in your case to take care of its maximum heat? The 7950X3D at max TDP is 150W. The 7800X3D is 90W. At gaming they're both half (give or take). The 13900KS iirc is >=150W gaming and 250W++ maximum otherwise. Either X3D chip puts out basically equivalent or better gaming performance than the 13900ks or 14900k in most games while using <=1/2 or 1/3 TDP of the 13900ks or 14900k. It limits your build because:
1. You need a larger capacity cooler for it.
2. You need somewhere to put that cooler.
3. You need to be able to vent said cooler with the fan/airflow capacity of the case, in such a way that it does not interfere with GPU airflow (because GPU boosts can be temperature sensitive, too).
If you want to use certain cases to build in, this isn't going to be possible. You seem to be used to testing on open bench platforms? Personally, being able to take the huge 360mm radiator out of my case in favor of this air cooler for the 7800X3D was a pretty liberating experience, and my GPU temperatures definitely scooted down a bit because I wasn't forced to take a slot for it. This let me move the Suprim X Liquid's exhaust to a more favorable position. You say that the vast majority of build heat is GPU. Okay, fine, but if your CPU is putting out 1/3rd versus <=1/6th of the heat of your GPU while gaming, then do the math. Yes, those extra watts aren't going straight to your GPU temp, but heat is heat. It's going somewhere and it has to get dissipated or exhausted somewhere. It's not really anything that theoretical; it's just thermodynamic principles. How much those extra degrees do or do not matter is up to you.
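Doing that math with round numbers: the gaming wattages below are assumptions picked to mirror the rough figures thrown around in this thread, not measurements.

```python
# Rough total-heat comparison for a gaming load. Essentially all of the
# electrical power drawn ends up as heat the case and coolers must move.
# The CPU and GPU wattages are illustrative assumptions, not measured data.
GPU_WATTS = 400.0  # assumed high-end GPU while gaming

def total_heat(cpu_watts: float, gpu_watts: float = GPU_WATTS) -> float:
    return cpu_watts + gpu_watts

low = total_heat(60.0)    # X3D-style gaming draw (assumed)
high = total_heat(180.0)  # hotter-running CPU gaming draw (assumed)

print(f"~{low:.0f} W vs ~{high:.0f} W of total heat, "
      f"about {(high - low) / low:.0%} more for the case to dissipate")
```

Whether roughly a quarter more total heat matters in a given case is exactly where the two posters land on opposite sides.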

Well, I'm comparing chips at closer price points. The 14900K and 13900K should primarily be compared to the 7900X, 7950X, and their X3D counterparts. PC Perspective showed the 7950X3D pulling 150w under testing.

Yep, afaik that's just absolute stress testing. Gaming is much lower than that, and basically half of a 13900K. Again, a 7800X3D in most games performs pretty close to a 7950X3D anyway, to the point where I don't think most people care.

No, I can't say that with 100% certainty. I haven't tested or evaluated every budget Intel board in existence, but yes, you can run those CPUs on some pretty cheap motherboards. I've personally run 13700Ks and 12900Ks on some pretty cheap boards, like the B760 Steel Legend from ASRock, which I reviewed here. Tom's Hardware reviewed a B760 motherboard with a 13900K here.

That Tom's Hardware one does seem to have some differences between the boards? Anyway, it's fine, I'll drop the VRM portion of this argument, then. Although I'll note that this Steel Legend is $160 whereas the one I linked was $125, but eh whatever, who cares...
 
I'm pretty confident you can run a 13900K or 14900K on most boards without issue or performance loss. I've never seen any motherboard that couldn't do it. Though that's not to say that such boards don't exist.
Both AMD and Intel get a little squirrely when dealing with boards running the A620M or H610 chipsets respectively. Some of the really cheap ones there max out at 65W power delivery, whereas others will give AMD the full 175w or Intel the full 150W, and others still may or may not have access to the higher power steppings; the 7950X can top out at 290W and the 13900K at 253W, though the situations that create those draws don't last long outside of test benches.

But top to bottom, there are boards for both that are capable of full chip performance for as low as $85 USD. I mean, I would not use an $85 board with a $600 CPU, as that in my head is just asking for a bad time, but the options exist.
 
I have to agree with Dan on this one. On a practical level it's not that big of a deal. Besides, the 7800X3D is faster in collective gaming benchmarks; that's cool. Intel is much faster at everything else a PC can do and uses more power to be faster. The AMD is slower and uses less power; it's fine. There is a trade-off, sure. It's just not that big of a deal. Do I tune my system for less heat and stable performance? Yes. But I don't get bent out of shape and jump ship to one AMD chip that has a small advantage in gaming, lol. That's silly. The difference is negligible, because at that point the 4090 is what you should be using if you're hell-bent on the best performance, best GPU, and best CPU, kicking and screaming elite style. But the 4090 puts the majority of us around the same performance level, and when the display maxes out at 144Hz it really is splitting hairs for argument's sake. Very few people have a 240Hz monitor, and if they do it would be 1440p, where there is even less of a difference in performance since it's more GPU-bound.
 
Both AMD and Intel get a little squirrely when dealing with boards running the A620M or H610 chipsets respectively. Some of the really cheap ones there max out at 65W power delivery, whereas others will give AMD the full 175w or Intel the full 150W, and others still may or may not have access to the higher power steppings; the 7950X can top out at 290W and the 13900K at 253W, though the situations that create those draws don't last long outside of test benches.

But top to bottom, there are boards for both that are capable of full chip performance for as low as $85 USD. I mean, I would not use an $85 board with a $600 CPU, as that in my head is just asking for a bad time, but the options exist.
I've never tested on anything less than B650 or B760 chipset-based boards. Using a $600+ CPU on an H610 chipset seems unlikely. Though your point is taken; such boards probably wouldn't handle those CPUs all that well.
I'm using a single-tower Noctua cooler. I'm not encountering any throttling on my 7800X3D at all, as far as I'm aware. During gaming tests, a 7950X3D pulls around 70W. Buildzoid did a test with a Wraith cooler on a 7800X3D on an open bench and also encountered no throttling. I'm not aware of what a 7950X3D would need during a synthetic test, but it's surely much less than a 13900K/14900K.
Again, that's a 7800X3D, and gaming isn't the only application people use their systems for. You can load up cores doing other things besides benchmarking. If you really push an AMD CPU and you are using PBO, etc., you are going past that TDP. Period. Yes, Intel CPUs pull more power. No one is saying otherwise.
Because you need a large radiator somewhere in your case to take care of its maximum heat? The 7950X3D at max TDP is 150W. The 7800X3D is 90W. At gaming they're both half (give or take). The 13900KS iirc is >=150W gaming and 250W++ maximum otherwise. Either X3D chip puts out basically equivalent or better gaming performance than the 13900ks or 14900k in most games while using <=1/2 or 1/3 TDP of the 13900ks or 14900k. It limits your build because:
1. You need a larger capacity cooler for it.
Technically, I suppose. However, on the high end, people who are interested in the top offerings from either camp are likely to buy the best air coolers, better AIOs, or go to watercooling. They aren't likely to stick with a Wraith Spire or put the cheapest cooler possible on a 14900K. Those coolers can likely handle either AMD's 7950X or Intel's Core i9 13900KS/14900KS, etc.
2. You need somewhere to put that cooler.
Outside of SFF this is rarely an issue.
3. You need to be able to vent said cooler with the fan/airflow capacity of the case, in such a way that it does not interfere with GPU airflow (because GPU boosts can be temperature sensitive, too).
If you want to use certain cases to build in, this isn't going to be possible.
You are splitting hairs. A good quality case with a good airflow design can handle Intel and AMD systems just fine.
You seem to be used to testing on open bench platforms?
While my board reviews are conducted this way, I still have personal systems in full cases. I've also built a number of AMD and Intel systems with mid-range to high-end CPUs and GPUs in cases. I've probably built hundreds of machines, professionally and personally, over the last 28 or 29 years. That includes multi-processor workstations and servers.
Personally, being able to take the huge 360mm radiator out of my case in favor of this air cooler for the 7800X3D was a pretty liberating experience, and my GPU temperatures definitely scooted down a bit because I wasn't forced to take a slot for it. This let me move the Suprim X Liquid's exhaust to a more favorable position. You say that the vast majority of build heat is GPU. Okay, fine, but if your CPU is putting out 1/3rd versus <=1/6th of the heat of your GPU while gaming, then do the math. Yes, those extra watts aren't going straight to your GPU temp, but heat is heat. It's going somewhere and it has to get dissipated or exhausted somewhere. It's not really anything that theoretical; it's just thermodynamic principles. How much those extra degrees do or do not matter is up to you.
Again, I've never seen the heat difference from going to Intel CPUs negatively impact GPU clocks or memory clocks in a significant or measurable way. Going from a 10980XE to a 12900K and keeping the same GPU didn't mean my GPU suddenly got faster or had better clocks as a result. The 10980XE package power was around 500w, if I recall correctly, when overclocked to 4.8 or 4.9GHz. I've also run multi-CPU and multi-GPU systems in cases. Again, what you are saying is conceptually true. (Theoretical wasn't the right word.) It's just not practically true. It's not that you are wrong; the heat is there, but it isn't enough in a well-built system to create the limitations you are talking about.
Yep, afaik that's just absolute stress testing. Gaming is much lower than that, and basically half of a 13900K. Again, a 7800X3D in most games performs pretty close to a 7950X3D anyway, to the point where I don't think most people care.
Again, gaming isn't the only consideration.
That Tom's Hardware one does seem to have some differences between the boards? Anyway, it's fine, I'll drop the VRM portion of this argument, then. Although I'll note that this Steel Legend is $160 whereas the one I linked was $125, but eh whatever, who cares...
I've never tested boards that cheap. As a reviewer for an enthusiast site, there isn't much appetite for H610 boards and the like. That being said, Intel boards have historically been slightly more expensive than their AMD counterparts anyway.
 
Hm? I thought most of this discussion was regarding gaming. Anyway, we're going to have to just agree to disagree I think. I've got things to do, like bed lol. At least it's been a civil discussion, which is rare on here. Props to that.
 
The 7800X3D is 90W. At gaming they're both half (give or take). The 13900KS iirc is >=150W gaming and 250W++ maximum otherwise.
The 7800X3D's full draw goes up to 160w plus, but that's still a lot less than the 13900K at 250-plus. So no, not half, but still significantly less.

Gaming, though, you're unlikely to see numbers that high on either.
 