How has Zen 3 managed to maintain its price after over a year?

That very well might be the case. In a related story, several other retailers in addition to MC now also have the 5800X on sale. Since the 5800X has always been the most available of the Zen 3 lineup, it makes sense for it to be the first one on sale, of course, but I can't help but wonder if they are making room for something else...
The 5800X that's coming with the stacked 3D cache, right? To be fair, the 5800X was the most overpriced chip in the lineup anyway.
 
Plus a lot of people are still upgrading old processors; heck, I plan to upgrade my 3900X soon.
I'm debating with myself whether to also upgrade my 3900X or wait for the next-gen CPU. Of course that means a new motherboard and probably 64 GB of RAM, plus the CPU, and my "Chief Financial Officer" might not "sign off" on the purchase.
 
I'm debating with myself whether to also upgrade my 3900X or wait for the next-gen CPU. Of course that means a new motherboard and probably 64 GB of RAM, plus the CPU, and my "Chief Financial Officer" might not "sign off" on the purchase.
Yeah, I need more processing power than my 3900X offers, but I also need at least 64GB of RAM. I know my next full system upgrade is gonna be a super, super expensive one, which is why the idea of a 5950X (3D cache) sounded appealing to me. Also, I can put my 3900X into my wife's system, currently running my old R7 1700.
 
I know my next full system upgrade is gonna be a super, super expensive one, which is why the idea of a 5950X (3D cache) sounded appealing to me.
Yeah, me too. I am leaning towards upgrading my present rig to 64 GB, then waiting a year, maybe 18 months, and getting a next-gen AMD CPU. I was thinking about Threadripper because they are quad-channel, but then I saw the prices of the 5000-series TRs. So maybe a TR is not in my future.
 
Listening to you guys talking about upgrading:

I've only got a 3700X. For my purposes it's fine. Though if I plan to keep this rig for the next two years, I've got to upgrade to ECC RAM.

That to me seems to be the fulcrum point... I'm playing roulette with 32 gigs; with 64 gigs, it's going to be a must.

I might swap out for a 5900 at some point. But for the development, virtualization, signal processing, and gaming I do, it's still more than I need.

Though I'm going to look closely at the new line of processors when they come out.
 
Thanks guys, core count does make sense WRT overall market demand. In my case I am just looking at single-threaded performance.
Most aren't in this day and age. Most [H] members are playing games @ 1440p+, where the GPU is the determining factor, or doing rendering-type loads, where multi-core matters. What single-threaded application, may I ask, would be worth a whole new platform that provides a marginal bump compared to a capable drop-in CPU upgrade?
 
Most aren't in this day and age. Most [H] members are playing games @ 1440p+, where the GPU is the determining factor, or doing rendering-type loads, where multi-core matters. What single-threaded application, may I ask, would be worth a whole new platform that provides a marginal bump compared to a capable drop-in CPU upgrade?
If you have a powerful GPU then single-threaded performance is still important at 1440p, especially with Nvidia, which has a weak scheduler and puts a lot of load on the CPU. A lot of applications use a finite number of threads, so single-threaded performance is tied to per-core performance and is still important. The gains per generation have been fairly large on AMD, and Alder Lake also had good gains; after 2-3 generations there will be a significant gap at this pace.
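
To put the bottleneck argument in concrete terms, here's a toy model (my own sketch with made-up numbers, not from any benchmark): delivered FPS is roughly the minimum of what the CPU can prepare and what the GPU can render, which is why a CPU upgrade shows up at 1080p, shrinks at 1440p, and can vanish at 4K.

```python
# Toy bottleneck model: delivered FPS is roughly capped by whichever of
# the CPU (simulation/draw-call work) or GPU (rendering) is slower.
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# Hypothetical numbers for illustration only, not measurements.
cpu_old, cpu_new = 140, 190  # frames/s each CPU can prepare
for res, gpu_fps in [("1080p", 240), ("1440p", 160), ("4K", 90)]:
    gain = delivered_fps(cpu_new, gpu_fps) / delivered_fps(cpu_old, gpu_fps) - 1
    print(f"{res}: {gain:+.0%} from the faster CPU")
```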
 
AMD still offers more raw performance cores to throw at workloads. Also ADL performs best with Windows 11.

I think once you dig deeper into what ADL offers and how it works, you can see why many people are hesitant to try something so new that also really requires Windows 11 for the best experience.

AMD is solid, runs great on W10 and older hardware, and runs on DDR4. I think that sums up why Intel would have dropped prices.


From what I have seen and remember, Alder Lake is superior on W11 only in specific situations. For a normal person, and especially a gamer, the difference is basically nil; you can safely use Alder Lake and enjoy good gaming performance.
 
(note: prices are sale prices from MC):

Current pricing for a 12-core R9 5900X is ~$500, while an 8-core R7 5800X is ~$350.
Current pricing for a 16-core i9-12900K is ~$549, while a 12-core i7-12700K is ~$350.
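
For a rough sense of the value gap, here is the price per core at those sale prices (a quick sketch; ADL core counts mix P- and E-cores, so it's not apples-to-apples):

```python
# Price per core at the quoted sale prices. ADL counts mix P- and E-cores,
# so treat the Intel figures as a ballpark rather than a fair comparison.
chips = {
    "R9 5900X (12 cores)":  (500, 12),
    "R7 5800X (8 cores)":   (350, 8),
    "i9-12900K (16 cores)": (549, 16),
    "i7-12700K (12 cores)": (350, 12),
}
for name, (price, cores) in chips.items():
    print(f"{name}: ${price / cores:.2f} per core")
```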

Zen 3 is over a year old at this point, while ADL is new and arguably the faster series. How in the heck has Zen 3 managed to not come down in price relative to ADL?

Would it be better to ask why ADL chips are so cheap? (I'm guessing DDR5 availability will be part of the answer here, but there are also DDR4 boards, so...)

Background: currently rocking a Zen 2 R5 3600X and considering an upgrade, but it doesn't seem to make much sense to pay the same for a Zen 3 chip as I could for a faster ADL chip.


I am in the same boat and want to upgrade my 3600. I find it stunning that AMD has decided to keep 5000-series prices high considering the competition has better products for cheaper. I refuse to pay an inflated price and would rather do an overhaul of my rig even if it would cost me more overall.

Could it be that the 5000 series is so expensive to make because every core has to be a "P-core," leaving no margin to do big drops?
 
From what I have seen and remember, Alder Lake is superior on W11 only in specific situations. For a normal person, and especially a gamer, the difference is basically nil; you can safely use Alder Lake and enjoy good gaming performance.
The scheduler is different in W11. I've used both (I have ADL), and I could tell the difference; other people may not, though. Basically, W11 is better optimized to utilize the E-cores. If you disable the E-cores (which many gamers on W10 do), then yes, there wouldn't be much of a difference.
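
For anyone on W10 who doesn't want to disable the E-cores in the BIOS, you can get a similar effect per game with processor affinity. A rough sketch in Python with psutil (this assumes the 16 hyperthreaded P-core logical CPUs enumerate before the E-cores, which is how ADL presents them; check your own topology first):

```python
import psutil

# Pin a game to the P-cores instead of disabling E-cores in the BIOS.
# Assumes logical CPUs 0-15 are the 8 hyperthreaded P-cores (Alder Lake
# enumerates P-cores first); verify your own chip's topology before using.
P_CORE_CPUS = list(range(16))

def pin_to_p_cores(pid: int) -> None:
    proc = psutil.Process(pid)
    proc.cpu_affinity(P_CORE_CPUS)  # scheduler now keeps it off the E-cores
    print(f"{proc.name()} pinned to logical CPUs {P_CORE_CPUS}")
```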
 
If you have a powerful GPU then single threaded performance is still important at 1440p, especially for Nvidia which has a weak scheduler and puts a lot of load on the CPU. A lot of applications use a finite number of threads so single threaded performance has a relation to performance per core and is still important. The gains per generation have been fairly large on AMD and Alder Lake also had good gains. After 2-3 generations there will be a significant gap at this pace.
Who is talking 2-3 generations? :rolleyes:
My point stands that it is minimal to nonexistent @ 1440p+ gaming when we are talking about an effortless CPU upgrade in comparison.
 
Who is talking 2-3 generations? :rolleyes:
My point stands that it is minimal to nonexistent @ 1440p+ gaming when we are talking about an effortless CPU upgrade in comparison.
Depends on the game. The difference between a 3900X and a 12700K can be as much as 30-40% at 1440p in some games with a 3080. In TechPowerUp's chart the 12700K is on average 20% faster at 1440p with a 3080, which is quite significant. The 5000-series Ryzens are mostly close, but the 3000, 2000, and 1000 series are starting to get old if you have a current-gen high-end GPU.
 
Depends on the game. The difference between a 3900X and a 12700K can be as much as 30-40% at 1440p in some games with a 3080. In TechPowerUp's chart the 12700K is on average 20% faster at 1440p with a 3080, which is quite significant. The 5000-series Ryzens are mostly close, but the 3000, 2000, and 1000 series are starting to get old if you have a current-gen high-end GPU.
We are ONLY talking the 5000 series here, if you had been following the thread.
I would like your link for that claimed 30-40% at 1440p. (Edit: I see you snuck a 3000 series into your comparison.) It absolutely isn't the norm, and if one games @ 4K or VR it is nonexistent.
The OP still hasn't replied with what his single-threaded app is, so it may not be gaming. E-sports @ 1080p? Then sure, go for a new rig if that floats your boat, but even then it's hardly worth it compared to a drop-in CPU upgrade, IMHO.
 
I am in the same boat and want to upgrade my 3600. I find it stunning that AMD has decided to keep 5000-series prices high considering the competition has better products for cheaper. I refuse to pay an inflated price and would rather do an overhaul of my rig even if it would cost me more overall.

Could it be that the 5000 series is so expensive to make because every core has to be a "P-core," leaving no margin to do big drops?

It's a frustrating choice, isn't it? I was expecting the financial decision to be more obvious between a one-plus-year-old CPU upgrade and a new combo, but they both come in pretty close. In my case I ended up going with the 5800X on sale for $349, and while I am still collecting my benchmarks, I'm seeing significant gains (>25%) over my 3600, so I am very happy with this decision. I also got a good price selling the 3600, which makes it even better.

I did have to buy another fan for my HSF (Scythe Mugen 5), but that's to be expected with the 5800X.
 
I bought my 5900X for $500 in August when supply was sporadic and usually more expensive, and then finally put everything together at Christmas. Now, I could have sold it and gone with Alder Lake, but my logic was that I already have the part, it's mature, and I don't really have to hunt for expensive 1st-gen DDR5 and mobos. I understand that I'd have to build my next system on a new platform, but by the time I'm ready to, there will be a whole new generation of parts anyway.

I mean, I was on a quad-core 6700K; Alder Lake or Zen 3, it was all going to be a huge jump that will last me 3 years at a minimum. I pay more attention to GPUs anyway after getting the main guts together...

Intel keeps the price lower than AMD to account for the cost of entry, i.e. DDR5 and new mobos. AMD doesn't have to lower prices until they are ready to release Zen 4, while letting DDR5 manufacturers get the hang of things during the initial market release. Zen 3 is still selling regardless... no need for them to mess with the steady income stream, I suppose, until they determine Zen 4 is ready for introduction.
 
don't really have to hunt for expensive 1st-gen DDR5

Yep, that was the deciding factor for me. It's one thing to spend a little more to get the latest platform, but with DDR5 being unobtainium at a reasonable price, buying a dead-end DDR4-based ADL setup just doesn't make any sense. #happywitham4
 
The last things I need to max out my current rig are that 8TB Sabrent Rocket Plus Gen 4 NVMe to replace the 4TB in there now, and a 3090 Ti; then I've maxed out what this platform can handle. I'll grab a 5800X3D for my side-project Ghost S1 Linux build, and a 3070 FE if I can find one for under $1000.
 
All this talk about expensive DDR5. I built an ADL setup with nice DDR4 RAM after much research. Current offerings of DDR5 are not faster, or at least not much faster, than high-end DDR4, so that was an easy decision for me. If DDR5 and newer boards become affordable in the next few years, I can simply pop my 12700K into a new board.

Or as usual, I will upgrade in a generation or two, and the current price of DDR5 will not matter.
 
I was planning on getting a 5800X-based system, as I mostly game, but I got a banging deal on the 5900X from a forum member here. If the "new" 5800X with the V-Cache is an improvement in games, I might get one when it releases... I really did not need the 12 cores... lol.

ADL was just a no-go for me too. DDR5... expensive mobo... upgrade to Win11? All of it made a Zen 3-based system more attractive.
 
I know not everyone has access to Micro Center, but they have the 5600X for $239 and the 5600G for $199. These are in-store-only prices.
 
I know not everyone has access to Micro Center, but they have the 5600X for $239 and the 5600G for $199. These are in-store-only prices.

This brings up another point that I came across in my decision to buy a 5800X: are six cores enough at this point? I can't help but wonder if some of the improvement I am seeing with the new CPU is due to the extra couple of cores, same as when I went from a C2D to a quad-core processor.
 
This brings up another point that I came across in my decision to buy a 5800X: are six cores enough at this point? I can't help but wonder if some of the improvement I am seeing with the new CPU is due to the extra couple of cores, same as when I went from a C2D to a quad-core processor.
Define "enough"?

Because of binning, and because higher-core-count CPUs usually have more cache and higher clocks on their best cores, it can be hard to know how much of a difference there is, but a 12600K with 6 cores tends to beat most 8-, 10-, and 12-core CPUs in most games.

Some testers have done the test:


A 10900K using only 6 cores will often beat a 10600K also using 6 cores by a good margin, which for many makes the conversation a bit moot. If you do not want to leave any GPU performance on the table, you need to go up in CPU price regardless of whether the core count is relevant, to get better binning and more cache. But if by "enough" you do not mean "is there any GPU performance left on the table," but rather "can I game AAA titles at 1440p very high comfortably, without a significant difference, because I have 'just' a 12600K," I think the answer is a full yes: 6 cores is quite enough.
 
This brings up another point that I came across in my decision to buy a 5800X: are six cores enough at this point? I can't help but wonder if some of the improvement I am seeing with the new CPU is due to the extra couple of cores, same as when I went from a C2D to a quad-core processor.
Prior to coming back to AMD I was running an older Coffee Lake 8086K @ 5GHz. I dropped a 9900K @ 5GHz into it to find out the answer to that very question.

Did not see a tangible difference in games pushing 1080 Tis in SLI or a 3080 Ti.

IMHO, 6 decent cores running at speed are likely enough unless you are pushing maximum pixels at 4K resolution.
 
Prior to coming back to AMD I was running an older Coffee Lake 8086K @ 5GHz. I dropped a 9900K @ 5GHz into it to find out the answer to that very question.

Did not see a tangible difference in games pushing 1080 Tis in SLI or a 3080 Ti.

I just ran a test with two cores disabled in the msconfig boot menu (confirmed after a reboot in HWiNFO), and my results also bore this out: less than a 2% increase in frames using 8 cores.
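
For anyone repeating this: besides HWiNFO, a couple of lines of Python will confirm what the OS actually booted with (a minimal check, assuming psutil is installed).

```python
import psutil

# Sanity check after the msconfig reboot: these counts reflect the booted
# configuration, not what the chip physically has.
print("Physical cores:", psutil.cpu_count(logical=False))
print("Logical CPUs:  ", psutil.cpu_count(logical=True))
```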
 
I just ran a test with two cores disabled in the msconfig boot menu (confirmed after a reboot in HWiNFO), and my results also bore this out: less than a 2% increase in frames using 8 cores.
Now re-run that test "dirty" (antivirus running, maybe a few browser tabs, a video render going, etc.).
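
If you want the "dirty" part repeatable, a crude sketch like this would stand in for the background junk (spawn a couple of busy-loop processes, then run the benchmark while they churn):

```python
import multiprocessing as mp
import time

def churn() -> None:
    # Busy loop standing in for background load (AV scan, encode job, etc.).
    x = 0
    while True:
        x = (x * 31 + 7) % 1_000_003

if __name__ == "__main__":
    # Eat roughly two cores' worth of CPU time while the benchmark runs.
    hogs = [mp.Process(target=churn, daemon=True) for _ in range(2)]
    for h in hogs:
        h.start()
    time.sleep(600)  # run the game benchmark inside this window
```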
 
It still costs quite a lot more to build a 12900K from scratch, and there are still places a 5900X wins. I'm going to keep harping on this: Intel's power consumption and cooling requirements for the i9-12900K are stupid. Just straight-up idiotic. To get the full beans out of these CPUs you have to run at least a 360mm triple-fan radiator liquid cooler.

You can still (barely) make a 5900X or 5950X fly on a Noctua D15 or similar.

The only one of the new Intels you can really put a D15-size cooler on and get full perf from is the i5-12600K. That's absurd.

So if you want to run DDR4 and AIR COOLING to build a simpler, more reliable, cooler-running, reasonably priced 12700K or 12900K... you are going to be giving up enough perf that a 5900X or 5950X is going to be right there on the charts.

Yes, Intel will smooth this out, but this generation isn't it. I'm still shocked at how many people are on this particular hype train even after the launch reviews hit. ADL is not impressive UNLESS you allow the power consumption and heat to go through the roof. And Intel just acts like this is somehow normal now. This kind of power consumption has never been "normal."

An i9-12900K plus an RTX 3080 or higher graphics card = a legitimate need for a 1000W+ power supply, because you can easily run a workload that needs 700W constant without any major overclocking. Not a spike, CONSTANT draw: 250W CPU, up to 350W video, and 100W of fans, drives, SSDs, RAM, etc. That's 700 watts. Crazy.
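
The back-of-the-envelope version, with my rough numbers above and the common guideline of keeping sustained draw around 70% of the PSU's rating (the 70% figure is a rule of thumb, not a spec):

```python
# Rough PSU sizing for a 12900K + RTX 3080 class build (approximate draws).
cpu_w, gpu_w, rest_w = 250, 350, 100  # CPU, GPU, fans/drives/RAM/etc.
total = cpu_w + gpu_w + rest_w        # 700 W sustained
headroom = 0.70                       # keep constant load near 70% of rating
print(f"Sustained draw: {total} W")
print(f"Suggested PSU:  {total / headroom:.0f} W")  # ~1000 W
```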

And for the record, I nearly decided against the 5900X because it bumps right up against the 180W mark, and that's about as much as you can air-cool while getting full performance.

And don't get me started on the RTX 3080. The first thing I did was rework my voltage curve to get more performance at less voltage and power draw. The thing is a furnace at stock settings. Things need to change here. This is the wrong direction, big time, and people have not realized it yet. They will soon.
 
It still costs quite a lot more to build a 12900K from scratch, and there are still places a 5900X wins. I'm going to keep harping on this: Intel's power consumption and cooling requirements for the i9-12900K are stupid. Just straight-up idiotic. To get the full beans out of these CPUs you have to run at least a 360mm triple-fan radiator liquid cooler.

You can still (barely) make a 5900X or 5950X fly on a Noctua D15 or similar.

The only one of the new Intels you can really put a D15-size cooler on and get full perf from is the i5-12600K. That's absurd.

So if you want to run DDR4 and AIR COOLING to build a simpler, more reliable, cooler-running, reasonably priced 12700K or 12900K... you are going to be giving up enough perf that a 5900X or 5950X is going to be right there on the charts.

Yes, Intel will smooth this out, but this generation isn't it. I'm still shocked at how many people are on this particular hype train even after the launch reviews hit. ADL is not impressive UNLESS you allow the power consumption and heat to go through the roof. And Intel just acts like this is somehow normal now. This kind of power consumption has never been "normal."

An i9-12900K plus an RTX 3080 or higher graphics card = a legitimate need for a 1000W+ power supply, because you can easily run a workload that needs 700W constant without any major overclocking. Not a spike, CONSTANT draw: 250W CPU, up to 350W video, and 100W of fans, drives, SSDs, RAM, etc. That's 700 watts. Crazy.

And for the record, I nearly decided against the 5900X because it bumps right up against the 180W mark, and that's about as much as you can air-cool while getting full performance.

And don't get me started on the RTX 3080. The first thing I did was rework my voltage curve to get more performance at less voltage and power draw. The thing is a furnace at stock settings. Things need to change here. This is the wrong direction, big time, and people have not realized it yet. They will soon.
I agree with a lot of what you said. Not sure I could run any of the ADL chips and a 3090 in any SFF build, though; the cooling would just have to be massive.
 
It still costs quite a lot more to build a 12900K from scratch, and there are still places a 5900X wins. I'm going to keep harping on this: Intel's power consumption and cooling requirements for the i9-12900K are stupid. Just straight-up idiotic. To get the full beans out of these CPUs you have to run at least a 360mm triple-fan radiator liquid cooler.

You can still (barely) make a 5900X or 5950X fly on a Noctua D15 or similar.

The only one of the new Intels you can really put a D15-size cooler on and get full perf from is the i5-12600K. That's absurd.

So if you want to run DDR4 and AIR COOLING to build a simpler, more reliable, cooler-running, reasonably priced 12700K or 12900K... you are going to be giving up enough perf that a 5900X or 5950X is going to be right there on the charts.

Yes, Intel will smooth this out, but this generation isn't it. I'm still shocked at how many people are on this particular hype train even after the launch reviews hit. ADL is not impressive UNLESS you allow the power consumption and heat to go through the roof. And Intel just acts like this is somehow normal now. This kind of power consumption has never been "normal."

An i9-12900K plus an RTX 3080 or higher graphics card = a legitimate need for a 1000W+ power supply, because you can easily run a workload that needs 700W constant without any major overclocking. Not a spike, CONSTANT draw: 250W CPU, up to 350W video, and 100W of fans, drives, SSDs, RAM, etc. That's 700 watts. Crazy.

And for the record, I nearly decided against the 5900X because it bumps right up against the 180W mark, and that's about as much as you can air-cool while getting full performance.

And don't get me started on the RTX 3080. The first thing I did was rework my voltage curve to get more performance at less voltage and power draw. The thing is a furnace at stock settings. Things need to change here. This is the wrong direction, big time, and people have not realized it yet. They will soon.
The 12900K is the only chip which essentially has no power limit as its stock/normal behavior. The 12700K is just fine at stock settings, and even with power limits unlocked it isn't worse than the 12- and 16-core Ryzens I've had. It's not until you start tweaking LLC to be more aggressive that the 12700K gets really toasty. But Ryzen is the same way.

TechPowerUp recorded a Blender load temp for the 12700K that is lower than their 5900X's, on a Noctua NH-U14S.
https://www.techpowerup.com/review/intel-core-i7-12700k-alder-lake-12th-gen/21.html
 
Five things about the TechPowerUp review immediately interested me:

1) They cooled all the test systems with an Arctic 360mm rad. That's $130, and you have to buy a case that can fit it.
2) They outfitted all the Intel systems with DDR5-6000. $450 RAM minimum.
3) They outfitted the AMD rig with DDR4-3600 CL16. That's $155, and it means there were higher FCLK settings available that they likely were not running, either by overclocking that RAM or by getting some actual DDR4-3800 (see the quick FCLK math after this list), all without pushing other power-intensive overclocking options like PBO at all. Any way you slice it, that's very high-end RAM against lower-mid-range RAM.
4) Something was up with the Blender test... the 5950X was 7C cooler than the 5900X and faster. I know the 5950X is usually binned slightly better anyway, but the result is a little outside the normal deviation. Either something else was up with the test, or Blender just scales in a way that exaggerates the number of full-size cores. Even so, it doesn't change the outcome that much, and it is an incredible showing for the 5950X.
5) All the test systems were run on a Seasonic SS-860W Platinum power supply. You can't buy that model currently, but I believe it's obvious they chose that size and type because it is about the smallest wattage you can comfortably run the 12900K on long-term. The current Seasonic Focus 850W Platinum is about $180; for argument's sake there is a name-brand EVGA 850W Platinum for $135. But... if you want a more reasonably priced build, you'd HAVE to drop to the 12700K, 5900X, or 5950X so you could safely pick an ordinary 750W Bronze supply and still be comfortable running an RTX 3080-class card. $66 for a 750W EVGA Bronze, or if you want to splurge, an EVGA 750W Gold for $90. These are big differences in price when looking at an entire build.
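
(The FCLK math referenced in point 3: at these RAM speeds Zen 3 runs FCLK 1:1 with the memory clock, and the memory clock is half the DDR transfer rate, so the arithmetic is simple.)

```python
# Zen 3 sweet spot: FCLK 1:1 with MCLK, where MCLK = DDR transfer rate / 2.
for ddr_rate in (3600, 3800):
    print(f"DDR4-{ddr_rate} -> MCLK/FCLK of {ddr_rate // 2} MHz at 1:1")
```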

I'll absolutely grant you that the 12700K is FAR more sane than the 12900K. BUT the best you can say about it is that we are back to the exact same battle we have had for the last couple of generations: Intel is holding a roughly 5% gaming performance lead, the 5900X is still edging out the 12700K in multi-core perf, and the only oddity this gen is the 12900K, which can take the 5950X in some workloads, but only if it's built far more expensively and with power limits completely removed.

If you drop the 12700K to DDR4 to try to equalize the overall build price against a 5900X, the performance numbers would probably be at near-exact parity with the 5900X... assuming you can actually air-cool the 12700K successfully. Otherwise you have to put the $50 price difference between the coolers in there too, but I'll just assume it's doable for the sake of argument. No one has had me build them a 12700K yet, so I can't speak to it from personal experience. I think this is where the real battle is: these two builds are going to be just about the same price and the same performance overall. Pick your team, same end result. Although the AMD will probably still cool a bit easier.

But I've done some 12900K systems now, and I can say something else about those in particular: labor is a thing. When systems MUST have big liquid coolers to work as intended, that means a lot longer build time and more to go wrong later, which means higher prices every step of the way.

I'm not hating on Intel here; I've owned one of just about everything since the dawn of PCs. I'm just showing that there's not ACTUALLY anything all that amazing going on with Alder Lake. In the end, how it performs for a given price and power consumption against the competition is what matters. And just like last gen, at anything close to equal price points and power usage, AMD is still possibly squeaking out a win, or, depending on how you define "value" and what is important to you, at least parity.

I'll revisit this when DDR5-6000 costs half as much as it does now, and when Intel puts the power limits back into a sane range for "stock/normal" behavior.
 