Like many others, I'm sitting on a slightly aged (but well-performing, 24/7-overclocked) Intel quad-core 6700K and considering an upgrade.
I haven't owned an AMD CPU since the days when the Athlon 1800+ and ABIT were royalty, and I was cautiously set on getting a 3000-series Ryzen this time.
For the very sad reason that my dog was sick and old and needed looking after, I had all the time in the world to sit with her and read all the major reviews in detail, and I noticed some interesting things.
Of course, disclaimer: all of this could change with BIOS updates, Windows patches, etc., but as it looks now:
If you just scratch the surface of the reviews, there is so much hype that it looks like AMD caught up, whipped Intel, out-cored them, and did it all with better power efficiency and pricing. The fact that Intel is still slightly ahead in games is acknowledged, but dismissed as unimportant because Intel is only about 5% ahead (according to the LTT video's average, for example). It might be a bit more than 5%, and certainly more once overclocked, since the 3000 series hardly overclocks at all.
Many reviews did not include overclocked gaming performance.
This fact alone is pretty big. Many people only do light tasks or office work on their computers, and the only demanding thing they do is play games.
A 5% lead (plus an optional overclock) in your main task, without having to worry about the still-unfixed chiplet/cluster latency issues, is pretty solid (the LTT video, as of this writing, describes inconsistencies probably caused by this). This will probably be mitigated in the Windows scheduler and/or in newer games, but it hasn't been yet.
The new Xbox and PlayStation will also use this architecture, which should help prioritize solving these issues, but it might not happen quickly. (And it can never be completely solved, since it's a physical design; only software workarounds can be created.)
Next, let's look at power efficiency. The pre-release hype was big on this, but let's look at the benchmarks.
First of all, many review sites use Blender with a fixed workload, such as rendering the Gamers Nexus logo.
I noticed that in Blender, Ryzen 3000 simply outperformed the Intel lineup:
https://www.sweclockers.com/test/27760-amd-ryzen-9-3900x-och-7-3700x-matisse/17#content
Here Ryzen 3000 outperforms the Intel lineup by a fair amount, but for that fixed workload the efficiency is only slightly better:
https://www.sweclockers.com/test/27760-amd-ryzen-9-3900x-och-7-3700x-matisse/27#content
The GN test has it at 87 for the 3700X stock vs. 91 for the 9900K stock. Across several benchmarks like this, it seems that in Blender, a best-case Ryzen 3000 scenario, the power-efficiency advantage is no larger than the performance advantage over the Intel lineup.
At best it looks like about 9% better efficiency. (Also, Blender is probably a poor benchmark to use, as it performs better on GPU or hybrid rendering: https://blog.render.st/blender-2-8-hybrid-cpugpu-rendering-speed-and-quality/ )
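To make the efficiency reasoning above concrete: efficiency folds together both how fast a chip finishes a fixed workload and how much power it draws while doing it, which is why a chip that wins the render-time chart can still end up only marginally ahead in performance per watt. A minimal sketch, using made-up placeholder numbers (not the actual review figures):

```python
def efficiency(work_units: float, seconds: float, watts: float) -> float:
    """Work completed per joule of energy consumed (perf per watt)."""
    return work_units / (seconds * watts)

# Hypothetical fixed Blender scene (1 "work unit") on two CPUs:
cpu_a = efficiency(1, 300, 90)   # finishes in 300 s at 90 W average
cpu_b = efficiency(1, 330, 95)   # finishes in 330 s at 95 W average

advantage = (cpu_a / cpu_b - 1) * 100
print(f"CPU A is {advantage:.1f}% more efficient")  # ~16.1% here
```

With these placeholder numbers, CPU A's 10% render-time lead combines with its lower power draw into a larger efficiency lead; with the real review figures, where the faster chip doesn't draw much less power, the gap shrinks to the single digits described above.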
So did anyone test real-world scenarios for power efficiency? One of the few was https://www.techpowerup.com/review/amd-ryzen-7-3700x/18.html
Here the stress test is still slightly in Ryzen 3000's favor, but in everything real-world Intel is ahead. (Of course, a full load across all cores can be a real-world scenario, but it's probably rarer on consumer parts.)
To sum it up, Ryzen 3000 might have a small efficiency advantage in all-core full-load workloads, and be slightly behind in everything else. Overall it's very even, and the efficiency story seems to have been mostly hype/marketing.
Considering we are comparing 7 nm to 14 nm, a case can be made that Matisse is either inherently very power hungry or simply doesn't take advantage of the 7 nm process's power benefits.
Which brings us to temperatures. It's almost as if the reviewer's guide asked reviewers to leave this out; almost no one published temperature results.
For those that did, it looks quite bad, and it might be the main reason the Ryzen 3000 series barely overclocks at all.
Those tightly packed 7 nm cores might simply be really hard to cool, given that power efficiency seems about equal.
These are the only two reviews I saw with temperatures, and the TechSpot one didn't include an Intel reference:
https://www.techpowerup.com/review/amd-ryzen-9-3900x/19.html
https://www.techspot.com/review/1869-amd-ryzen-3900x-ryzen-3700x/
Personal thoughts
I am still strongly considering a 3900X, as it is a unique 12-core product at a great price, and the chiplet/cluster issues may very well be optimized away in the future.
If I decide against the 3900X, I see no reason to buy the rest of the 3000 lineup; I will either go for a 9900K or wait for the next generation and re-evaluate, as the 6700K is still quite strong. The minor price difference simply doesn't seem worth gambling on the chiplet/cluster design.