Upgrading - i9-12900 vs amd 5950x - and what motherboard to get

Since I'm trying to decide whether to go with the AMD 5950X or the Intel 12900, and what motherboard to use, I put this here. If it's the wrong place, let me know so I can move it to where it SHOULD be.

It's time to upgrade the 6-year-old system. This is not going to be a gaming system, though there may occasionally be a bit of gaming. I'm looking to optimize speed and efficiency across the board when doing large-ish image processing - things like 10-12-image D850 panos, multiple layers, Lightroom, and faster processing in tools like DeNoise AI and Sharpen AI. And, of course, faster processing across the board in Lightroom, Photoshop and everything else. Video is a possibility, so I'll want to be able to handle that well too.
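For what it's worth, a quick back-of-the-envelope calculation (my assumptions, not gospel: 8256x5504 D850 frames, held as 16-bit RGB layers once opened) shows why 64GB isn't overkill for this kind of pano work - the source layers alone run into gigabytes before Photoshop adds its own working copies:

```python
# Rough working-memory estimate for a multi-frame D850 pano.
# Assumptions: 8256 x 5504 pixels per frame, 3 channels, 16 bits/channel.
width, height = 8256, 5504
bytes_per_pixel = 3 * 2                       # RGB at 16 bits per channel
per_frame = width * height * bytes_per_pixel  # ~273 MB per open layer

for frames in (10, 12):
    print(f"{frames} frames: ~{frames * per_frame / 1e9:.1f} GB of raster data")
```

And that's before masks, adjustment layers, and undo history, which multiply the footprint.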

I went over to Puget Systems, where they test the kind of things I do, and they gave the i9-12900 rave reviews, showing it consistently being faster than the AMD 5950X. At the same time, they didn't mention ANYTHING about heat issues, so when I came over here I was rather surprised by statements like "You can't cool the 12900 without water cooling…"

I'll be using a full tower case because I need the front slots for removable hard drive carriers, want lots of space for the ATX board and cooler, and I'd like as many front-panel USB3 ports as possible. I'd STRONGLY prefer NOT to have to water cool, and I've got a Noctua NH-D15 that is usable on the AMD 5950X with an adapter, and hopefully on the 12900 (I'm checking on the necessary adapter). My current system has been overclocked for the last 6 years, so I'm sure the new one will be too.

I'm going to put in 64GB of DDR5 memory if I get the 12900, or 64GB of DDR4 with the AMD, and a 1TB NVMe SSD for the O/S and general stuff. I've got an EVGA 850W PSU. The GPU is an AMD 5700XT, which should be fine 'til the graphics card stupidity abates - if it ever does.

SO, given what I primarily do, looking at cost (semi-important), performance (very important), overclocking (important), reliability (important), etc - am I better with the Intel i9-12900KF or the AMD Ryzen 5950X? And why?

Motherboards for the Intel CPU seem to fall into 2 camps: expensive boards, like the $600 ASUS ROG Hero, that are full of tweaks for gaming, and less expensive but (I suspect) not as capable $300 motherboards. Are the things that make gaming motherboards successful ALSO things that benefit me, or is most of what makes a board good for gaming completely wasted on what I'm doing? Is there a motherboard that would be optimal for this type of application that's extremely capable, has easy, great overclocking, AND is cost effective? Or, if I end up with the i9-12900, do I just shut up, buy the ASUS Hero, and be done with it?

OR, is the 5950X going to provide effectively the same real-world performance in a more "mature" package? Is the heat problem talked about with the 12900 NOT an issue with the 5950? Will my NH-D15 happily keep this thing cool with a moderate/reasonable overclock and a bunch of memory- and CPU-hog applications running? What are the great options for motherboards for my type of setup? Since the 5950X is limited to DDR4 memory, it looks like the fastest is 3600 MHz rather than 4800 - but is this going to make any real difference? It looks like the 5950X would be slightly cheaper overall to build, but not enough to worry about. So, given my environment, what are some great motherboards for this chip?
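On the DDR4-3600 vs. DDR5-4800 question, the raw peak-bandwidth math (a sketch, assuming dual-channel and 8 bytes per transfer per channel; real-world gains are usually much smaller because latency and caches dominate) looks like this:

```python
# Theoretical peak bandwidth for dual-channel memory.
def peak_gb_per_s(mts, channels=2, bytes_per_transfer=8):
    """MT/s x channels x bytes per transfer, in GB/s (decimal)."""
    return mts * channels * bytes_per_transfer / 1000

ddr4 = peak_gb_per_s(3600)  # DDR4-3600
ddr5 = peak_gb_per_s(4800)  # DDR5-4800
print(f"DDR4-3600: {ddr4:.1f} GB/s")
print(f"DDR5-4800: {ddr5:.1f} GB/s ({ddr5 / ddr4 - 1:.0%} more peak bandwidth)")
```

Whether that extra peak bandwidth shows up in Lightroom/Photoshop is another matter - most testing so far suggests it often doesn't.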

The budget isn't unlimited, but neither is it so constrained that I want to end up with a disappointing system... Thoughts?
 
One thing to consider right now: DDR5 is impossible to find, and it looks as though it's going to remain this way for the foreseeable future. So if you're considering the 12900KF, a DDR4 MB is probably your best option.
I'm unable to help you more than that little bit unfortunately.
Best of luck!
 
The thing is, the "gaming" branding on motherboards is all marketing. Nearly all desktops these days are built for gaming, and the research done by Intel, AMD, ASUS, MSI, GIGABYTE and others shows this. They've known it for more than 10 years now. The rest are built for productivity or niche applications. The fact is, motherboards do not have optimizations for gaming; it's not really a thing. That said, gaming boards do have a gamer-focused aesthetic, including RGB lights and all of that. There is the idea that the overclocking capability of so-called gaming motherboards is for gamers, but the vast majority of people do not overclock. And while it has become more mainstream, modern CPUs rarely benefit from overclocking at all. Most of the time, the biggest benefit we see from overclocking is in multi-threaded applications like those you would be using, more so than in games.

There are motherboards that have a focus on content creators and are marketed as such. But when you compare similarly priced gaming and content-creator motherboards, they are virtually identical outside of their aesthetic qualities. Indeed, GIGABYTE's X399 Designare EX and X399 Aorus Gaming 9 were identical. I know this from having reviewed both boards and compared them side by side. The latter had extra LEDs and a Killer NIC instead of an Intel NIC, but both used identical PCBs and overall designs. The VRMs were the same, and the only difference in the UEFI was the color scheme.

So don't be fooled by the idea that gaming motherboards and content-creation motherboards are different, because they really aren't. There are some prosumer-type motherboards, like the ASUS ProArt series and WS series boards, which sometimes offer workstation CPU support and the like, and those are a different animal. What you really should look at are the features and what the platform has to offer. What are you doing with the system? How much connectivity or I/O do you need?

On the subject of Intel vs. AMD, Intel will be the better option in the long run, especially with DDR5. It really depends on how long you intend to keep the machine and whether or not you would likely upgrade it along the way. AMD's X570 is a dated platform now, but it was a solid one to begin with and a good foundation for a workstation/professional-type build. Threadripper is better suited to these tasks in a lot of respects, but the price goes up considerably to step into the HEDT world. On the Intel side, the 12900K is impressive, and it will only get better as software developers learn to leverage the E cores. The platform is also from Intel, which usually means it's basically mature out of the gate. I wouldn't really worry about platform maturity with Intel in most cases; Intel does its platforms better out of the gate than anyone else ever has.

Intel's platform just launched. Intel isn't one for creating long-lived upgrade paths for CPUs, but the DDR5 boards will get better as DDR5 gets faster and its latencies improve. While DDR5 is only faster in a limited number of areas today, this is likely to change over time. The modules available a year from now will be better than what's out there today, and cheaper. While I often argue that future proofing is something of a fool's errand, you have a better chance on that front going Intel, whether you go with DDR4 and Z690 or DDR5. AMD's platform, on the other hand, is older, and the last upgrade it will get is 3D cache versions of existing Ryzen 5000 series CPUs. This will primarily impact gaming more than anything. Today, Z690 outclasses AMD's X570 in just about every way you can measure it.

DDR5 will potentially benefit non-gaming workloads today (as hinted at by the Gamers Nexus video on the topic), but as others have said, DDR5 availability is shit right now and no one really knows for how long. Supposedly, later in December there will be some relief on this front, but that remains to be seen. DDR5 availability will be worse and cost will be higher than DDR4 for quite some time. That's about all that can be said for sure. That doesn't mean it will be unobtainium in a few weeks, but it is basically impossible to get at the time of this writing. I know, I've been trying to get hold of some for my own build to no avail. And I refuse to pay scalper prices, though those are actually coming down. I've actively seen auctions relisted with lower Buy It Now prices. They aren't in the realm of sanity yet, but they are already abating nicely, which means we are likely not going to be in the same situation as we were with Ryzen 5000 series CPUs and GPUs a year ago.

Furthermore, Intel's Z690 platform has more PCIe lanes than X570, and the CPUs have achieved functional parity in this regard. Intel has more bandwidth with PCIe 5.0, which we won't see any advantage from today, but will in the future. This will hit storage devices first, and storage I/O often impacts things like video editing, content creation, development applications and DB work more than it does games. If the goal is longevity, I'd say wait for DDR5 to become available; if you can't wait, go with an Intel DDR4-based motherboard.

On the subject of DDR4 vs. DDR5 motherboards, the DDR5 boards are the more premium solution. They will potentially have nicer NICs, audio and other integrated features, as well as potentially better M.2 configurations and so on. You will get better, cooler-running VRMs on a lot of DDR5 boards vs. DDR4 ones. However, GIGABYTE is offering its more robust VRMs lower on the product stack this time, so that's worth looking into. You still won't get the same feature set going from DDR5 boards to DDR4 ones, but you may be able to get what you need.

TLDR:
Intel Z690/DDR5 if you plan on keeping this machine a long time with upgrades along the way.
AMD or Intel Z690/DDR4 if you need it now and are less likely to upgrade along the way.
 
Thanks for the replies. Yeah, DDR5 is almost impossible to find, and what you do find is being scalped - I saw one 16GB kit that was $2700! FORTUNATELY, I don't have to do anything right away. My existing system is still good, but at 7 years old, it's getting pretty far behind the current world.

It sounds like the 12900 would overall be the best choice, and once DDR5 memory is available (presuming that doesn't take a year), that seems like the "best" way to go. It also sounds, from my research, like I can use my NH-D15 air cooler, and if things get too hot down the road I can switch to water cooling - turns out there are some that aren't the huge mess they used to be. I'd definitely not want to lock myself into a system with a motherboard limited to DDR4 memory and have to change it... At the same time, the 5950X is a great performer, and it uses cheaper DDR4 memory that can (apparently) easily be overclocked into the mid-4000 MHz range. And, of course, there are several really good motherboards.

It looks like there are a couple well-received motherboards for the 12900, so that's not a big issue.

I've been doing research for a few hours, and once I get past the glitz, carnival lights, "mystic music", and all the rest of the fluff, it seems like, underneath, the things I need - a ton of CPU horsepower, a huge amount (64GB) of fast memory, and a LOT of high-speed storage - are (with the exception of the storage) the same things most users need. The big difference is I don't need as much GPU as the serious gamers.

So, thanks for clarifying some of the questions I had - it sounds like EITHER will work, but if you're lazy, like I am, the Intel is more future-proof. I'll have to wait and see what happens with memory 'cause everything else is available. Now I've got to go find an adult, full-tower case! No blinking, flashing, changing colors, dancing, singing, or lighting up like a carnival ride... Just sit there and do WORK!
 
I have been able to get the 5900X to do around 4.5-4.6GHz all-core and maintain stability on an AIO. Now, even among applications like Photoshop and Premiere, you do not always see a benefit from an all-core manual overclock, and you often hurt performance in other applications by doing that. Really, I'd turn on PBO or PBO2 and let it do its thing. That seems to provide the best all-around performance, regardless of the workload.
 
I am a fan of the Fractal cases if you want something clean and nice looking with good quality.
 
Regarding the CPUs themselves:

This channel does a whole lot of CPU tests for video/image editing, video timeline performance, etc.

https://www.youtube.com/c/TechNoticeOfficial/videos



They also have a couple of videos helping to debunk the notion that 12th-gen Intel is crazy on power and heat. They are only crazy if you are doing sustained, all-core, 100% loads. There aren't a lot of real-world cases which do that. Yes, things look crazy with benchmark stress tests.


Aside from all that: 12th-gen has a healthy lead in most real-world apps, except encryption/decryption. But certain workflows with 4K+ video do see benefits from the 16 big cores of the 5950X, as do certain rendering workflows.

-----------------------------------
Regarding motherboards:

At this point, most AMD boards are behind on features. If you aren't exactly picky about motherboards, you are going to get relatively more features from an average Intel board vs. an average AMD board.

However, if you are picky about having certain I/O, there have been a couple of recently released creator boards for AMD which bring some modern stuff, such as Thunderbolt, display output over Thunderbolt, 60-watt power delivery (maybe more?) over DisplayPort, etc. It sounds like you may want Thunderbolt if you are going to be dealing with a lot of video and images.

The problem with Intel right now is that I don't think any DDR4 12th-gen boards offer Thunderbolt at this time.

*However, you could always buy a Thunderbolt add-in card, whichever way you go (AMD or Intel).

And if storage is important to you, Intel's PCIe 5.0 support might be intriguing.

However, if you don't need all of the features of Thunderbolt, you might be fine with the 20Gbps USB-C that is nearly standard on Intel boards. While about half the speed of Thunderbolt, that's still very fast - and twice the speed of the USB-C on AMD boards. Just be sure to check that your devices support it; some devices will drop down to 10Gbps instead.
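To put those link speeds in perspective, here's the raw line-rate math for moving a hypothetical 100 GB of footage (best case only; protocol overhead and drive speed will make real transfers slower):

```python
# Best-case transfer time at raw line rate (no protocol overhead).
def transfer_seconds(size_gb, link_gbps):
    return size_gb * 8 / link_gbps  # GB -> gigabits, divided by link speed

for name, gbps in [("10Gbps USB", 10), ("20Gbps USB-C", 20), ("40Gbps Thunderbolt", 40)]:
    print(f"{name}: ~{transfer_seconds(100, gbps):.0f} s for 100 GB")
```

So even the "half-speed" 20Gbps port moves a big card dump in under a minute at line rate; the drive on the other end is usually the real bottleneck.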
 
The problem is the heat on these CPUs. I just built a 12900K system and couldn't tame it with a 280mm AIO - everything stock, latest BIOS, etc. I had to use a voltage offset to tame the beast. That being said, it's a helluva CPU and has crushed everything I've thrown at it. No doubt AMD has the efficiency game won at this point, and I can't wait to see what they bring on 5nm. Right now I'm in a holding pattern until after the holidays. The original plan was to sell the 5950X system to offset the cost of this, but I'm hesitant at this point... maybe just stick with AMD for now.
 
"couldn't tame it" doing what?
 
Couldn't even tame it gaming - high 80s into the 90s. Yes, I know that's not TjMax, but I don't like to run my CPUs that hot, period. Cinebench was hitting 100... and throttling. Had to use a voltage offset to get it to reasonable temps. That's with a 280mm AIO, and after messing with the fan curve, going max on fans, etc. There's a reason they sent test kits with 360mm AIOs...
 

Interesting. My 5950X on a 360mm AIO would hit mid 50s, low 60s under load. The 12900K I'm on right now under the same load is low 60s to high 70s with the same 360mm AIO. I would think a 280mm AIO should be fine, honestly.
 
That doesn't sound right. A Noctua NH-U14S can keep a 12900K in the low 90s during Blender and Cinebench (albeit with the fans maxed). A 280mm AIO should be doing a fair bit better.

Gaming temps and power usage for Alder Lake have been shown to be very good.

Maybe your cooler wasn't making good contact. I would try re-installing it. And does your cooler have mounting hardware for LGA 1700?
 
Been there, done that. Been doing this a long time. LGA 1700 bracket supported, etc. It's certainly possible that the block's cold plate is not making great contact, but the indents, etc. look fine. I will be trying another 280mm AIO from a different brand soon... just haven't had time.

I'm also running this on a GIGABYTE Z690 ITX board with the latest BIOS, DDR4-3600, in a Cooler Master NR200P Max with fans exhausting and intaking at the bottom, and this was without the side panel on.

I'm going to move my 5950X system into the NR200P Max, and probably get a Lian Li O11 Dynamic Mini with a 360mm AIO and transplant the 12900K system there, as well as replace the motherboard.
 
why replace the mobo?
 
I'm going full ATX for the O11 Mini. I'd rather not stick with ITX if I'm not really going ITX.
 
Thanks for the replies. I've just started looking at motherboards, starting with Gigabyte. I'm down to the Ultra, Pro and Elite AX, and I can't find anything to explain the pricing... The only thing I've found is a half dozen extra ports on the back panel, for $50 or $90 depending on which board above the Elite AX. That one also has HDMI - how do I feed an HDMI monitor if a board has only one DisplayPort output and no HDMI?

So which is it as far as the 12900 and heat? Some are saying it's not a problem and that an NH-D14 will keep it cool (I HAVE an NH-D15), and others are saying they can't cool it with a 280mm AIO (I've gotta go look that up to see what it means)... I don't want to build this thing if it's going to make the man cave so hot I can't work in it, so is the 12900 just a power-guzzling furnace, or is it actually usable for reasonably heavy loads (not gaming level, image-processing level) with air cooling? Or do I need to budget the money for water cooling and the time to learn HOW to water cool the thing? Or just bag it and drop back to the AMD 5900/5950?
 
Would be interested to see how well this is working on the 12900K. I've used Process Lasso for a couple of years now and absolutely love its ability to quite simply micromanage CPU resources, among a multitude of other useful duties.
 
It works well. Der8auer put out a video on that.
 
I myself am waiting for DDR5 to mature and the new AMD CPUs to drop before upgrading my X99 rig.
1. No such thing as a gaming MB - it's marketing hype. Buy whatever board meets your requirements, not some Amazon/paid reviewer's needs. (Dan's posts/reviews are an excellent place to start.) All vendors have gems and duds within their product lines. MTBF data would be nice, if vendors would post it in the public domain.
2. I plan not to be a beta tester for Intel's P and E cores. Same applies to AMD's 3D vertical cache tech supposed to be on the next-gen CPUs. Be patient - I had to do the same while finding a 3080 GPU at MSRP.
3. Unless you need the DDR5 advantage for very specific software applications, wait until latency and price improve before early adopting. Are most vendors abandoning quad-channel RAM boards with DDR5? Plus, ECC would be nice.
4. Thermals vary with the case and other factors; both will need a beefy cooler or an AIO/custom loop. I plan a custom loop for my next rig; for comparison, my NZXT AIO hits 70C under load, 5820K CPU stable with a 1GHz OC.
 
I stand corrected on my thoughts on the efficiency of the 5950X vs. the 12900K. Hopefully I'll be able to do more testing with proper cooling after the holidays.
 

Puget Bench Premiere Pro extended benchmark also had good/interesting results:


It's only during absolute full, sustained, all-core loads that Alder Lake gets crazy.
 


That guy is a buffoon. He is arguing that your system is idle most of the time, so Intel is more power efficient. If your system is going to be idle most of the time, why are you buying a high-end CPU? The maximum power draw is significantly higher for Intel.

I've used both systems, and for my usage, the Intel is definitely hotter and draws more power. It's not necessarily a problem, but unless you're doing what the Puget benchmark emulates (Adobe, I guess?), it's not as useful as it seems.
 
It's a creator-focused channel, and his test is a way to show that while editing, Alder Lake does great on efficiency. And it is very useful, IMO. Most sites/channels mainly test power usage with gaming and Cinebench.

I think the point is that Alder Lake isn't going to be crazy hot or inefficient for a creator during much of the creation process.

It would be cool to see a power usage comparison for exporting a video project. Power usage while editing in a video timeline should be very competitive. But I would like to see whether exporting a video project puts a completely full load, for max heat output, on the CPU the way Cinebench does.

And even if it does fully heat up----the efficiency everywhere else, and quicksync support, could make it overall worth it for a creator to go with Alderlake.
 

Adobe has traditionally been very Intel-biased, IIRC. Like I said, the 12900K is definitely hotter and less efficient for my use cases than my previous 5950X. It's better than previous Intel offerings though.

Don't get me wrong, it's a good step by Intel compared to their previous offerings. I think the competition has reached a point where you could go either way, depending on what you need in your workload. There is no one answer for every use case.
 
Anyone with a 12900K needs to optimize the voltage themselves. My 12900K went from 1.38V to 1.16V at stock all-core turbo. CPU package power dropped from 240 watts to around 160-170 watts in Cinebench while still getting the same performance. Temps sit around 70C, while my 5800X gets beyond 85C with the same cooler! This difference in voltage is insane, and I'm not sure why Intel jacked up the default voltage like that when it's more than necessary to maintain an all-core 4.9GHz. ADL is actually much more efficient than reviews suggest. Play with your BIOS!
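For what it's worth, those watt numbers line up with the textbook first-order model, where dynamic CPU power scales with the square of voltage at fixed frequency (P ≈ C·V²·f). A sketch using the voltages from this post (real silicon also shifts leakage and boost behavior, so treat this as a sanity check, not a guarantee):

```python
# First-order estimate: dynamic power scales ~ V^2 at fixed frequency.
v_stock, v_tuned = 1.38, 1.16   # volts, from the post above
p_stock = 240                   # watts, CPU package in Cinebench at stock

p_est = p_stock * (v_tuned / v_stock) ** 2
print(f"Estimated package power after undervolt: ~{p_est:.0f} W")  # ~170 W
```

~170W is right at the top of the 160-170W range reported, which is a good sign the undervolt is behaving as expected.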
 

This is what I experienced, though I want to rule out some things to validate. Curious what your cooling configuration is and what your temps were on the 12900K prior to your voltage adjustments.
 
Just a U12a Chromax for the time being. It gets to around 80C after a 30-minute run in Cinebench R23 on an MSI Z690 EDGE without touching anything.
 
Thanks for the link, Vader. Good to know this beforehand, as it appears to create irreparable damage to the CPU heat spreader.
 
Been researching the stuff I actually figure I need, and the price of the boards keeps getting lower! I'm down to 3 or 4 boards that range from $300 to $400, probably lower when they're on sale, that'll do everything I'm likely to ever need. There's a video on YouTube somewhere (I have no idea what the link was) that went into excruciating detail on VRMs, heatsinks, heat pipes, fins, and different capacitors, complete with charts, graphs and a lot of opinions. The bottom line was that spending more than $250-300 for a motherboard quickly hits the point of diminishing returns. Essentially, as price goes down, the quality of the heatsinks, heat-protected areas, capacitors, and all the stuff that controls power and keeps things cool goes down. BUT a $250 board today is "drastically better" than a $250 board from a few years ago. So I'll most likely buy a more modest board - something low/mid-range...

At this point it's down to tiny differences that are 99% opinion - stuff like "Gigabyte has a lousy BIOS" and "ASUS has plastic mounts for M.2 drives so you don't have to mess with tiny screws", and trivial stuff like that.

Now I just need a WHOLE LOT of DDR5 memory to hit the market so prices aren't utterly ridiculous!

I'm going to sit on things and see what happens in the next few weeks. Waiting for memory. If I CAN'T get DDR5 in a reasonable time, I might as well switch and build a 5900X or 5950X system and use DDR4... But if I do, I'll always be wondering if that set of 12 5-shot HDR images from the D850 would have rendered a lot faster if I'd built the 12900 system instead.
 
Just to give an update on my cooling concerns with the 12900k... it appears to have been a defective AIO in my new Cooler Master NR200P MAX. I've transplanted my 5950x system into it and it's running hot as well. I will validate on Friday by replacing the cooler with a known-good NZXT Z63 AIO I have, to make absolutely certain that's what it was. Fingers crossed.
 
Interesting. My 5950x on a 360mm AIO would hit the mid 50s to low 60s under load. The 12900k I'm on right now under the same load is low 60s to high 70s with the same 360mm AIO. I would think a 280mm AIO should be fine, honestly.
This is because AMD targets 68C as a temperature, throttling clocks to maintain it. You can override this, of course, using PBO or manual tuning.
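That throttle-to-a-target behavior can be sketched as a toy control loop (purely illustrative - this is not AMD's actual boost algorithm, and the step size, clock limits and sensor readings here are all invented):

```python
# Toy illustration of boost-to-a-temperature-target behavior:
# trim clocks whenever the die exceeds the ~68C target, restore
# them when there's headroom. All numbers below are made up.

TARGET_C = 68.0
STEP_MHZ = 25  # hypothetical adjustment granularity

def adjust_clock(current_mhz, temp_c, max_mhz=4500, min_mhz=3400):
    """Drop clocks when hot, raise them (up to the cap) when cool."""
    if temp_c > TARGET_C:
        return max(min_mhz, current_mhz - STEP_MHZ)
    return min(max_mhz, current_mhz + STEP_MHZ)

clock = 4500
for temp in [62, 66, 69, 71, 70, 67, 64]:  # invented sensor readings
    clock = adjust_clock(clock, temp)
print(clock)  # settles a bit below max while temps hover around the target
```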
 
OK, y'all, back to questions....... I've returned to earth with an unpleasant thump! DDR5 is ridiculous and is going to STAY ridiculous for a while - and I need 64GB of the &^%$#@ stuff! By my estimate, I can build a 12900 system with DDR4 memory, USE IT for 6 months or a year, and when DDR5 stops being so stupidly expensive, buy a DDR5 board and memory, sell the other one, and STILL be several HUNDRED dollars CHEAPER than buying the DDR5 board and memory now, once any memory even shows up..........

SO, best reasonable motherboards for a 12900 and DDR4, for image processing - pretty demanding image processing, it appears - and that's BEFORE I touch video.

I liked ASUS, Gigabyte and MSI when looking at DDR5 mobos. Any great reason I should look elsewhere for DDR4 boards? I recently stepped down from boards like the AORUS Master to the Ultra or even the Pro, or the MSI MPG Carbon. But on ASUS I keep looking and end up rejecting boards below the Strix-E Gaming for various reasons.

I've GOT to have wifi, and it should be an ATX board. I need quite a few rear-panel Type-A ports (multiple printers, scanner, pen tablet, monitor and a couple of things I don't even remember) all using USB ports on the back. It would be NICE to have a lot of front-panel USB ports, since there's a webcam, Logitech dongle, headset dongle, camera tether, 2 memory card readers and 2 external USB 3 HDDs connected to the front. I'm currently pulling extensions from 4 rear-panel USB 3 ports to the front of the system so I have enough front-panel plugs... If I have to, I can split the single USB 3.2 header and add an 8-port USB 3 front panel for more ports - ASSUMING this doesn't screw up speeds on the USB 3 ports.

I plan to overclock to a reasonable extent, but nothing insane, and I'll be HOPING to stay with my Noctua NH-D15 tower cooler rather than going to the hassle of water cooling.
---
I'm ALSO looking at memory - I need 64GB and want it in TWO sticks, not 4. And I'd MUCH prefer memory that just does a great job of being MEMORY - FAST memory - not putting on a light show. The fastest I can reasonably get, with some reasonable overclocking headroom - 3600 worst case, ideally 4000 or faster.

AND, of course, this is going to happen QUICKLY - 'cause otherwise I'll start thinking about it and decide to wait 3 months for DDR5 to be available and 3 more months for it to get to a reasonable price and in six months I STILL won't have built a system......

So, best choices?
 
I liked ASUS, Gigabyte and MSI when looking at DDR5 mobos. Any great reason I should look elsewhere for DDR4 boards? I recently stepped down from boards like the AORUS Master to the Ultra or even the Pro, or the MSI MPG Carbon. But on ASUS I keep looking and end up rejecting boards below the Strix-E Gaming for various reasons.
Not really.
I've GOT to have wifi, and it should be an ATX board. I need quite a few rear-panel Type-A ports (multiple printers, scanner, pen tablet, monitor and a couple of things I don't even remember) all using USB ports on the back. It would be NICE to have a lot of front-panel USB ports, since there's a webcam, Logitech dongle, headset dongle, camera tether, 2 memory card readers and 2 external USB 3 HDDs connected to the front. I'm currently pulling extensions from 4 rear-panel USB 3 ports to the front of the system so I have enough front-panel plugs... If I have to, I can split the single USB 3.2 header and add an 8-port USB 3 front panel for more ports - ASSUMING this doesn't screw up speeds on the USB 3 ports.
Anytime you use a USB hub, you are taking bandwidth and dividing it. There is no getting around that. Motherboards also use internal USB hubs for some of their USB ports, so you might be dividing ports a second time. Something to think about.
I plan to overclock to a reasonable extent, but nothing insane, and I'll be HOPING to stay with my Noctua NH-D15 tower cooler rather than going to the hassle of water cooling.
Don't expect much on this front. Cheaper motherboards have cheaper voltage controllers, which do not always offer the granularity to really dial in an overclock. Even if they did, these CPUs aren't the most amazing overclockers. You can sometimes get the P cores to their maximum boost clock and sustain it, but don't expect to exceed that. Even then, you are unlikely to achieve this at all on air, as this is a 240W+ CPU under full load without being overclocked. If you start overclocking it, you will go beyond that pretty easily.
---
I'm ALSO looking at memory - I need 64GB and want it in TWO sticks, not 4. And I'd MUCH prefer memory that just does a great job of being MEMORY - FAST memory - not putting on a light show.
RGB LEDs tend to be on the fastest memory modules. Even when they aren't, it's not as though RGB LEDs cost much, if anything, nor do they take away from performance. It isn't as if one precludes the other. As for 2 modules versus 4, it doesn't really matter on the Intel side. Intel's IMC doesn't really care unless you are shooting for extremely high memory clocks.
The fastest I can reasonably get, with some reasonable overclocking headroom - 3600 worst case, ideally 4000 or faster.

AND, of course, this is going to happen QUICKLY - 'cause otherwise I'll start thinking about it and decide to wait 3 months for DDR5 to be available and 3 more months for it to get to a reasonable price and in six months I STILL won't have built a system......

So, best choices?
Define "reasonable." It's all cheap compared to DDR5, but you can go with DDR4 rated at 5133MHz if you want to. Once DDR5 is widely available, the price will drop. It's already dropping from the scalpers, who are finding it isn't selling well. It's already down to around $800 for a 32GB kit of DDR5-5200 or even 5600. A bit more than a week ago it was all $1,200 and higher.
 
In an unrelated question – I watched a long video about motherboards. In it, the guy stated that he LIKES motherboards with debugging LEDs, and he downrated every board that didn't have them. I've always HAD motherboards that gave me numeric codes when there were problems. I'm seeing a lot of DDR4 motherboards (and possibly DDR5 ones) that no longer HAVE the debugging display. I don't see one on the ASUS TUF Gaming Plus or even the Strix-A Gaming. The Gigabyte Aorus Pro does, but the Aorus Elite and Gaming X don't appear to.

Is this old technology that’s going away, and if so, what’s replacing it? The only thing I could find was something with 4 colored LEDs that seems a lot less useful.

Can you explain a little more about USB and dividing bandwidth? For example, if there's a USB3 header on the motherboard, it says it can handle 2 Type-A ports… I presume the bandwidth is divided between THOSE two ports. But does this USB header share anything with the USB3 ports on the back panel? OR, if there are 4 USB3 Gen 1 Type-A ports on the back panel, do they share bandwidth?

If I plug that one header on the motherboard into an 8-USB3-port front panel, I presume ALL those ports share the available bandwidth?

Problem is, ALL the DDR4 motherboards I'm looking at – the ones in the $400-and-under range that don't get buried in reviews as having "cheaped out on components to create a $200 motherboard" – only HAVE one USB3 header on the motherboard…
The ASUS TUF Gaming Plus, Strix-A Gaming, MSI MPG Edge, MSI Tomahawk, Aorus Pro, Aorus Elite and Gaming X all appear to be limited to ONE USB3 header for the front panel.

In terms of overclocking – I've read repeatedly that a $400 motherboard often won't overclock any better than a $300 one, or in most cases a $200 one. And that with "much" overclocking and a heavy sustained load on a 12900, heat WOULD become an issue on air cooling… If I get the overwhelming urge to overclock to where it's an issue, I'll switch to water cooling. BUT is there some spec for the VRM that indicates it'll be better for overclocking? So far I don't think I've seen anything that wasn't at least around 12+1 and 70A, with some at 16+1+1 and 90A. The only ones I don't see much information on are the ASUS boards. And, of course, I run into discussions that state that anything over 10+1+1 is overkill even for a 12900…

As for memory – a couple of the reviews I read indicated that using two slots "clocks faster than 4". But they were usually embedded in long discussions of single rank vs. dual rank, latency and so on, followed by a comment that "we're talking about splitting hairs". Are we talking about a 50% difference in performance or a 2% difference?

I don't CARE if the memory lights up like a carnival, but in one of the memory reviews there was some kvetching because the non-RGB version was $200 and the "exact same memory" WITH RGB was $270 – the comment was that lights aren't worth $70.

In terms of memory speed – from what I've found so far, up to 3600 is cheap, but as soon as I get over 4000, most memory goes to 2x8GB. There are a few 2x16GB kits at 4400, but they're 3-4 times as expensive.

As for DDR5 – if I thought it was going to be widely available in a month, it'd be worth waiting. But I'm concerned that we're talking SIX months before it's widely available and the scalpers aren't getting it all. I'd be happy to see that happen real soon, but given what I've seen in the last year, I'm not very confident we're a month away from wide availability and normal pricing.
 
In an unrelated question – I watched a long video about motherboards. In it, the guy stated that he LIKES motherboards with debugging LEDs, and he downrated every board that didn't have them. I've always HAD motherboards that gave me numeric codes when there were problems. I'm seeing a lot of DDR4 motherboards (and possibly DDR5 ones) that no longer HAVE the debugging display. I don't see one on the ASUS TUF Gaming Plus or even the Strix-A Gaming. The Gigabyte Aorus Pro does, but the Aorus Elite and Gaming X don't appear to.
He can have whatever opinion he wants. The thing about the 4-LED setup is that it's simple: you can take one glance at it and see what the problem is in a no-POST situation. It's true that a debug LED readout can tell you more, but you have to cross-reference the code with a chart in the manual to know what it means, and even then it's not necessarily more helpful. There are a lot more codes, but often a given code still just translates to "CPU error" or "memory error" and tells you nothing more. When you have a CPU error, you know the motherboard isn't detecting the CPU for some reason, and there are only a few things you can do about that. The debug LED works about as well as the idiot lights. That said, the OLED displays are better and actually do tell you more, in that they give a clear indicator in English that's easier to read than the idiot lights, with no cross-reference needed.
Is this old technology that’s going away, and if so, what’s replacing it? The only thing I could find was something with 4 colored LEDs that seems a lot less useful.
The four idiot lights are replacing them. On ultra-high-end motherboards we are seeing tiny OLED displays that can show errors in plain language. But as I said, the four idiot lights are about as useful as the traditional debug LED readout when you get down to it.
Can you explain a little more about USB and dividing bandwidth? For example, if there's a USB3 header on the motherboard, it says it can handle 2 Type-A ports… I presume the bandwidth is divided between THOSE two ports. But does this USB header share anything with the USB3 ports on the back panel? OR, if there are 4 USB3 Gen 1 Type-A ports on the back panel, do they share bandwidth?
Bandwidth is not divided on the header; each of those two USB ports gets the full bandwidth the USB standard has to offer. You asked whether taking a single USB header and using it to add more ports splits the bandwidth, and the truth is, it does. What I also stated was that sometimes your onboard USB ports are divided already, as manufacturers sometimes artificially increase the number of available USB ports by using USB hub chips on the motherboard to split ports into additional headers. Doing that shares the bandwidth across everything split off the hub. Anytime you take a single USB port and add a hub, you divide the bandwidth. This is normally fine, as most things don't require the full bandwidth USB has to offer. Storage devices are the only real exception to this rule in most cases.
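A quick sketch of the sharing math (using nominal signaling rates before protocol overhead, and assuming the worst case where every device on the hub transfers at once):

```python
# Nominal USB signaling rates in Gbps (before protocol overhead).
USB_GBPS = {"3.2 Gen 1": 5.0, "3.2 Gen 2": 10.0}

def per_device_gbps(standard, devices_on_hub):
    """Worst-case share per device if everything on the hub transfers at once."""
    return USB_GBPS[standard] / devices_on_hub

# One Gen 1 header split out to an 8-port front panel, all ports busy:
print(per_device_gbps("3.2 Gen 1", 8))  # -> 0.625 Gbps each, worst case
```

In practice mice, dongles and card readers sip bandwidth, so the split only hurts when several storage devices transfer simultaneously.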
If I plug that one header on the motherboard into an 8-USB3-port front panel, I presume ALL those ports share the available bandwidth?
That is correct.
Problem is, ALL the DDR4 motherboards I'm looking at – the ones in the $400-and-under range that don't get buried in reviews as having "cheaped out on components to create a $200 motherboard" – only HAVE one USB3 header on the motherboard…
The ASUS TUF Gaming Plus, Strix-A Gaming, MSI MPG Edge, MSI Tomahawk, Aorus Pro, Aorus Elite and Gaming X all appear to be limited to ONE USB3 header for the front panel.
The reason behind this is actually rather simple. The chipset itself only supports 14 USB ports, in a variety of configurations, meaning the motherboard manufacturer can allocate them pretty much as they see fit - but there are limitations. You only get so many USB 3.2 Gen 2 ports, and that sort of thing. The reason higher-end motherboards can support more ports than that is that they either use internal USB hubs, which further divide or multiplex the ports (thus sharing bandwidth), or they add extra USB controllers, which eat up PCIe lanes.
In terms of overclocking – I've read repeatedly that a $400 motherboard often won't overclock any better than a $300 one, or in most cases a $200 one. And that with "much" overclocking and a heavy sustained load on a 12900, heat WOULD become an issue on air cooling… If I get the overwhelming urge to overclock to where it's an issue, I'll switch to water cooling. BUT is there some spec for the VRM that indicates it'll be better for overclocking? So far I don't think I've seen anything that wasn't at least around 12+1 and 70A, with some at 16+1+1 and 90A. The only ones I don't see much information on are the ASUS boards. And, of course, I run into discussions that state that anything over 10+1+1 is overkill even for a 12900…
I think people misunderstand this concept, but here it goes. The fact is, the CPU is normally the limiting factor when overclocking with ambient cooling methods - by that I mean both air and water cooling. Higher-end motherboards are much more capable overclockers for a variety of reasons, most of which are academic; again, the CPU usually limits your overclocking results. Where this changes is with phase-change or LN2-type cooling solutions. At that point, the higher-end motherboards far outstrip the cheap ones, as they are designed with settings and functions that make that kind of tuning possible in the first place. The voltage controllers are higher end and support more granular tuning of your voltages. A cheap voltage controller will not offer you that capability.

The higher-quality voltage controllers will not only give you a greater range of voltages, but allow you to adjust them in finer increments, such as 0.1 instead of 0.5 or whatever. Base clock can be adjusted in finer increments thanks to a superior external clock generator. And the VRMs on higher-end motherboards can supply more power than those on cheaper boards. The chokes on a cheaper board might be rated at 50A or 60A, but the ones on a Maximus Z690 Extreme are rated for 90A each, and there are more of them. The end result? Your VRMs run cooler and more efficiently because they don't have to work as hard to do what's required. In theory this means the board should last longer than a cheaper one, because it's not running at 80% output all the time. It's the same reason you don't want a power supply running at maximum output constantly: it can do that for short periods, but over the long haul it reduces component life.
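A back-of-the-envelope sketch of the per-phase load (assumptions: even current sharing across phases, conversion losses ignored, and the ~1.25v Vcore figure is invented for illustration):

```python
# Per-phase VRM load estimate (simplified: assumes even current
# sharing across phases and ignores conversion losses).

def per_phase_amps(package_w, vcore_v, phases):
    """Split total Vcore current (P / V) evenly across the phases."""
    total_amps = package_w / vcore_v
    return total_amps / phases

# A 240W load at an assumed ~1.25v Vcore is about 192A of current.
# On a 16-phase board with 90A stages:
print(per_phase_amps(240, 1.25, 16))  # -> 12.0A per phase, far below the 90A rating

# On a cheaper 8-phase board with 50A chokes, it's still only ~24A each:
print(per_phase_amps(240, 1.25, 8))  # -> 24.0
```

Which is why phase count alone tells you little at stock loads; the headroom mostly matters for heat, longevity and extreme overclocking.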

As for the specifications, it's easy to get bogged down in the details, but the fact is that motherboard manufacturers are not really all that forthcoming about how they design their VRMs and what actually works better. ASUS and other manufacturers have often been guilty of misrepresenting their voltage solutions and artificially inflating the phase count. For example, the Maximus XI Hero has a four-phase design, but it was marketed as a "Twin 8-phase" power design. That marketing was straight bullshit: the design eliminates doublers, and what ASUS did was simply use two inductors per phase. The "Twin 8" thing never made any sense, even by ASUS's own logic for what counts as a phase. The reality is, 4+1 or 10+2 or 16+2 etc. really doesn't matter. That's all semantics; the count isn't what matters. What matters is how the VRM is designed, how much each power stage is rated to handle, and what kind of voltage controller is behind it.

Because I and other reviewers called ASUS on its bullshit, it no longer markets its phases as deceptively as it once did. Instead, it counts the power stages rather than the phases and gives you that information. As long as a given model wasn't cheaped out on to the point where it runs super hot, I wouldn't worry about it. The real reason to buy higher-end motherboards usually comes down to wanting their additional features: nicer audio, more M.2 slots, more USB ports, fluff like aesthetics, or longevity, because you don't want a board whose phases are working at the edge of their capacity 24/7. Some people just like nice things. Whatever the reason, phase count is generally the least among them. Motherboard makers give you the bare minimum to handle every processor available at the time you bought the board. However, nicer boards can sometimes get BIOS updates for CPUs that lower-end ones can't run due to their cut-rate VRMs. It doesn't happen often, but it can and does happen; even Intel's own boards have had that problem from time to time.
As for memory – a couple of the reviews I read indicated that using two slots "clocks faster than 4". But they were usually embedded in long discussions of single rank vs. dual rank, latency and so on, followed by a comment that "we're talking about splitting hairs". Are we talking about a 50% difference in performance or a 2% difference?
The difference is small but repeatable - in the 1-3% range, if I recall the data correctly. I can't say for certain, as I haven't run that particular test myself.
I don't CARE if the memory lights up like a carnival, but in one of the memory reviews there was some kvetching because the non-RGB version was $200 and the "exact same memory" WITH RGB was $270 – the comment was that lights aren't worth $70.
It depends. Almost all of the super-high-end memory modules with the fastest rated speeds and the best timings have RGB lighting on them. There are some models that don't, but most do. That said, if there is a price difference but no spec difference, get the cheaper one if you don't care about the lighting. The point I'm making is that the lighting comes on the best kits in most cases, and you're not really paying for the lights - you're paying for the memory ICs and superior binning.
In terms of memory speed – from what I've found so far, up to 3600 is cheap, but as soon as I get over 4000, most memory goes to 2x8GB. There are a few 2x16GB kits at 4400, but they're 3-4 times as expensive.
This is something you have to get used to. DDR4 rated at 3600MHz and below is quite common and cheap. There is no difference in manufacturing between the cheaper and more expensive stuff; however, very few of the ICs made can actually reach the higher speeds. Making memory - or any semiconductor - is a little like baking. You can put in the same ingredients every time, but some batches come out better than others. Some cookies will have more chocolate chips, some might have burnt edges, whatever. The point is that while they all do the same thing, they do not all come out of the oven equal. The best ones go for a premium because there is demand for the higher-end part and they are much rarer. It's even rarer for higher-density ICs to reach the same level of performance; that's why prices for 16GB modules over 4000MHz are so insane. Few memory ICs can achieve that.
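The baking analogy can be sketched as a toy binning simulation (the speed distribution, bin thresholds and yields below are invented purely for illustration, not real yield data):

```python
import random

# Toy model of memory-IC binning: ICs come off the line with a spread
# of maximum stable speeds; each one sells at the highest bin it clears.
# The distribution parameters here are invented for illustration.

BINS_MHZ = [3200, 3600, 4000, 4400]  # rated-speed bins, lowest first

def bin_for(max_stable_mhz):
    """Highest rated bin the IC can pass, or None if it fails even 3200."""
    passed = [b for b in BINS_MHZ if max_stable_mhz >= b]
    return passed[-1] if passed else None

random.seed(0)
batch = [random.gauss(3700, 250) for _ in range(10_000)]  # invented spread
counts = {b: sum(1 for ic in batch if bin_for(ic) == b) for b in BINS_MHZ}
print(counts)  # most ICs land in the 3200/3600 bins; only a sliver clears 4400
```

The scarcity at the top of the distribution is the whole story behind the price curve: a 4400 part isn't made differently, it just tested better.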
As for DDR5 – if I thought it was going to be widely available in a month, it'd be worth waiting. But I'm concerned that we're talking SIX months before it's widely available and the scalpers aren't getting it all. I'd be happy to see that happen real soon, but given what I've seen in the last year, I'm not very confident we're a month away from wide availability and normal pricing.
Supply hit in mid-December, as rumored, and it provided some relief - I was able to find DDR5 for my setup. Even the scalpers are lowering their prices. Unfortunately, retailers have raised theirs, and DDR5 still isn't widely available. That being said, like GPUs, the kits are in fact out there, but it takes dedication and time to find what you are looking for without getting screwed. I put a bunch of DDR5 kits on my eBay watchlist, and many if not most of the sellers have sent me offers for upwards of a couple hundred dollars less than what the auction was originally set at.
 
Wow! Thanks for the great explanations………

I've seen the displays on the $700+ motherboards and thought the idea of showing the actual errors was great. Problem is, I wouldn't come close to exploiting a motherboard that expensive. If I thought I needed an $800 motherboard I'd buy it, but while I chew up a lot of memory and have periods when the system is 100% busy, it's not constant for hours. Nor do I plan to overclock the system to the maximum extent it's capable of.

I'll not be concerned about the lack of a debugging display, and I'll presume that if there's a problem, it'll be easy enough to figure out what's wrong from the 4 lights.

I'm likely one of the people who misunderstands what's involved in overclocking, so thanks for the detailed explanation… So, a real-world question - and I'll understand if the answer is "not likely without a $700 motherboard"… From looking at tests others have done (taken with a grain of salt), the 12900 can "easily" be overclocked to 4.9GHz on air cooling. Presuming I get an average 12900 CPU, is it reasonable to expect to do that with a mid-range (those 60-70A) DDR4 motherboard like a $270 AORUS Elite, $300 MSI MAG Tomahawk or $350 ASUS Strix-A Gaming? Or is even that likely to require a $600-800 motherboard?

Again, thanks for the info on memory. I knew 3600 was the "sweet spot" on DDR4, and presumed that when the memory gets built, they test it and most of it goes in the 3600 bin, the bad stuff goes in the 3200 bin, the 10% of better stuff goes in the 4000 bin, and the top few percent goes in the 4400 bin - with everything priced accordingly.

I've looked at eBay occasionally for memory, but the whole place seems so seedy these days that it doesn't inspire confidence. It's like buying a TV that "fell off a truck" from a guy in an alley. That's OK for a $20 something, but I'm not sure I want to buy $1,000 worth of memory that way.

Since you were aware of the December supply hit, do you have any feel for when supply is likely to open up to a reasonable level? Are we looking at January or June? Or even the end of 2022 or longer?
 