AMD Ryzen 9 7950X3D CPU Review & Benchmarks: $700 Gaming Flagship

We don't, and I doubt it's true, given that literally no one complained about having to spend a few bucks on a new mounting bracket for their existing coolers back when AM4 was introduced. Apart from that, I'd imagine increasing socket thickness would be the more obvious solution to that problem.

I find it far more likely that the primary reason for the extra vertical space is the cache in the 3D versions of the CPU. If I remember correctly, the previous-gen 5800X3D required quite a bit of effort to obtain thinner chiplets so that the cache could be stacked on top within the existing AM4 spec.
The IHS for AM5 is indeed thicker.

The IHS thickness for the AM5 X3D chips is exactly the same (Der8auer measured it in a video today).
 
What's most interesting to me is how the AMD CPU, when scaled to a given wattage, is somehow always hotter than the Intel CPU at the same wattage. I wonder what's going on there.
The smaller package makes it significantly harder to dissipate heat due to the lack of surface area. This is the key problem both Intel and AMD are going to run into as process nodes shrink. The actual heat being dissipated through the IHS is much lower than what the cores themselves are running at. That's one of the reasons AMD made the IHS thicker: it acts like a heat sponge, so to speak, since heat transfers from the dies to the IHS more efficiently than from the IHS to the cooler when going from an idle state to a loaded state.
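A rough back-of-the-envelope sketch of that surface-area point; the die areas and wattages below are made-up illustrative numbers, not measured values for any specific chip:

```python
# Rough heat-flux comparison: pushing the same power through a smaller die
# area means a higher flux density for the IHS to spread out.
# All numbers below are illustrative assumptions, not measurements.

def heat_flux(power_w: float, die_area_mm2: float) -> float:
    """Heat flux density in W/mm^2."""
    return power_w / die_area_mm2

large_die = heat_flux(power_w=120, die_area_mm2=200)  # hypothetical older, larger die
small_ccd = heat_flux(power_w=120, die_area_mm2=70)   # hypothetical modern compute chiplet

print(f"Large die: {large_die:.2f} W/mm^2")
print(f"Small CCD: {small_ccd:.2f} W/mm^2")
# Same total wattage, roughly 3x the flux density -> hotter cores even though
# the amount of heat the cooler ultimately has to remove is identical.
```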
 
It looks like the 7000 series doesn't benefit quite as much from the extra cache, but the results are still nice and the efficiency looks great. It looks like the 7800X3D will be the AM5 CPU to get for gaming and the 7950X is a better value for production, but this generally gives you the best of both worlds for not too much more than the 7950X.

I don't really care for the asymmetrical design, or especially that it requires software with updated profiles to work properly. I'm not a fan of Intel going big/little, and this seems worse due to the extra software requirements and the complexity involved in assigning cores based on the type of task rather than just how demanding it is.

I'm going to think it over some, but I'll probably just grab a 5800X3D since it would be a drop-in upgrade for me and still isn't far behind the best in gaming. The only thing holding me back is that I'll probably upgrade to 32GB of RAM, and then I'll be a motherboard away from updating my secondary PC, so I'll probably still end up buying just as many components. However, a DDR4 kit and B450 board would be a lot cheaper than the DDR5 kit and X670 board I'd get if I made the jump to AM5.

On the Intel side, the 13900K is still a top-end gaming CPU and the 13600K is a really nice bang-for-the-buck CPU, but they're unlikely to have nearly as many upgrade options available as an AM5 system, which is a big factor for me when everything else seems fairly equal.
 
Should I stick with my AM4 5950X?
That's what I'm doing. This thing is impressive, but not $1000+ system upgrade impressive.

Can't wait to see next gen. Especially as things like the thread priority scheduler improve.
 
Don't we already know that the Zen4 chips have an extra thick IHS to maintain compatibility with AM4 coolers? Delidding, or even shaving off a mm or so, takes a huge chunk of temp off.
Great point, I hadn't given that enough thought. I'll just chalk it up to being a bit tired after work today.
IDK what you mean. AMD blows them away in efficiency, so not sure what you mean by what he is drinking lmao.

He has no idea what he's talking about. And the closest thing he can come up with as a retort is to reference some power-scaling numbers from the 45W-higher-TDP 7950X.

AMD could choose to juice the 7950X3D with another 100W of maximum headroom, take away every marginal win Intel currently holds by doing so, and still come in with a lower total power draw.
Wow, so just ignore the link I posted to spoon-feed you. Let me spoon-feed you actual images, then.

130586.png

Holy shit, look at that: a low single-digit performance delta even when power-scaled all the way down to 35W on both AMD and Intel. Stop drinking the Kool-Aid. AMD and Intel are both close in performance right now, full stop.
 
Don't we already know that the Zen4 chips have an extra thick IHS to maintain compatibility with AM4 coolers?

Only some of them, though. If the cooler requires removing the bracket on the back of the board (a good chunk of them), it still won't work.

Which is kind of a bummer. I was going to try to reuse my old EK Supremacy EVO CPU block, but that is apparently not going to work :(
 
Cool, let me know when we can play this game. In all seriousness, Cinebench has become a meme of a benchmark that spits out numbers that don't translate into actual performance, just like Ashes before it. Seems the benches that favor AMD all end up in the same manner; who would have thunk.
These are scaling charts where the CPUs are basically measured against themselves on a wattage basis. You seem to be dismissive of numbers only when they don't fit your narrative.
 
These are scaling charts where the CPUs are basically measured against themselves on a wattage basis. You seem to be dismissive of numbers only when they don't fit your narrative.
I understand full well what those numbers mean, and yet, unfortunately, they don't really apply to performance deltas when power-scaling in any other application. So those numbers only apply to this one specific application, and that really doesn't mean much at all when the application doesn't actually have any tangible purpose, now does it?
 
I understand full well what those numbers mean, and yet, unfortunately, they don't really apply to performance deltas when power-scaling in any other application. So those numbers only apply to this one specific application, and that really doesn't mean much at all when the application doesn't actually have any tangible purpose, now does it?
I see, so since Warhammer and Cinebench don't match up 1:1, the Cinebench scaling numbers are meaningless, and Warhammer is the intuitive benchmark for watts-to-performance? I could barely get that typed out before laughing out loud for real.

While you were scanning the first page for graphs, I was reading the article, including the second page with the conclusion:

"The biggest winner in our testing is AMD with the Ryzen 9 7950X. The biggest takeaway from our analysis is that despite curtailing the max power consumption on the Ryzen 9 7950X by over 50%, plenty of performance remains on the table to be tapped into. Seeing the chip retain 80%+ of its stock performance even when the peak power consumption is just 42% is a very encouraging outcome. This makes SFF computing more appealing, even at the higher end."

My conclusion would be that Warhammer could be a best-case "Goldilocks" scenario for the 13900K, and other games would probably drop off even more.
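For what it's worth, the rounded percentages in that quote work out to nearly double the perf-per-watt at the reduced limit; a quick sketch of the arithmetic:

```python
# Quick check of the quoted claim: ~80% of stock performance retained at
# ~42% of stock peak power. Uses only the article's rounded percentages.
perf_retained = 0.80   # fraction of stock performance kept
power_fraction = 0.42  # fraction of stock peak power drawn

efficiency_gain = perf_retained / power_fraction
print(f"Perf-per-watt vs. stock: {efficiency_gain:.2f}x")  # ~1.90x
```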
 
Cool, let me know when we can play this game. In all seriousness, Cinebench has become a meme of a benchmark that spits out numbers that don't translate into actual performance, just like Ashes before it. Seems the benches that favor AMD all end up in the same manner; who would have thunk.

Cinebench is a good approximation of the rendering capabilities of a CPU. Sure, GPU rendering is becoming more widespread in all but the highest-end stuff. Still, CB does correlate to approximate render horsepower in many engines.

This is in regards to the 7950X. The 13900K is not too far behind... but man, look at what it is having to suck down to get there. In Linux Blender render tests it peaks at 300 watts, averages 20% more power draw vs. AMD, and performs 9% slower depending on the scene. I will grant Intel that most people buying chips in this range are not really going to be high-end CPU render users. This is what is happening with these architectures in the server space as well... which is why AMD is starting to seriously hurt Intel's stock price. If you are running a CPU render farm, you would be insane to go Intel right now.
https://www.phoronix.com/review/intel-core-i9-13900k/5

This is from 7950X3D testing. What impresses me here is that in real-world Blender rendering the 3D cache part loses very little performance vs. the standard 7950X. The 7950X3D seems to be the best of both worlds... it has the cache for games, and a few production things that can actually use it, yet it still performs almost identically to the non-3D part in real-world rendering and other productivity-type tasks. The icing on the cake seems to be the even lower power draw on the 3D part. In the Phoronix Blender 3.4 test, the 3D part is only drawing around 140 watts, compared to the 7950X at around 190 watts... and the 13900K, which both parts beat, is drawing 220 watts and peaking at 250. (It does seem like Intel's latest firmware has brought the peak power down a bit on the 13900K. Now it's just insane instead of a potential meltdown hazard.)
https://www.phoronix.com/review/amd-ryzen9-7950x3d-linux/14
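Rough energy-per-render math on the wattages quoted above; the render times are placeholders I made up purely to show the calculation, not Phoronix results:

```python
# Energy-per-render sketch. Average wattages are the rough figures quoted
# above; the render times are hypothetical placeholders, not benchmark data.
chips = {
    "7950X3D": {"avg_watts": 140, "render_seconds": 100},
    "7950X":   {"avg_watts": 190, "render_seconds": 98},
    "13900K":  {"avg_watts": 220, "render_seconds": 108},
}

for name, d in chips.items():
    energy_wh = d["avg_watts"] * d["render_seconds"] / 3600  # watt-hours per render
    print(f"{name}: {energy_wh:.2f} Wh per render "
          f"({d['avg_watts']} W avg over {d['render_seconds']} s)")
```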
 
I understand full well what those numbers mean, and yet, unfortunately, they don't really apply to performance deltas when power-scaling in any other application. So those numbers only apply to this one specific application, and that really doesn't mean much at all when the application doesn't actually have any tangible purpose, now does it?
AMD and Intel aren't even close to equal on performance/power right now. There's a lot of data on the net, even on power-normalized graphs outside of the Cinebench and AT game benchmarks you showed (where it's entirely GPU-limited, btw), showing that Intel consumes way more power for the same amount of work. If Cinebench and Blender aren't enough, try all the other benchmarks that actually use the cores and see which CPU scales better when power-normalized. Power consumption is one of the key reasons AMD is winning server contracts at the rate they are.

Also, the 13900K scaled down to 35W won't show the same scaling in simulation games, or games that actually use the CPU (and X3D chips are usually much faster in those games anyway).

I'd say if we're arguing about which CPU scales better with power, a benchmark (Blender, CB, V-Ray, simulations, etc.) that actually utilizes the CPU carries more weight than games that don't come close to utilizing the CPU fully. You can argue that AMD and Intel are close in performance right now, but performance/power? Nope.
 
What I'm noticing with these benchmark reviews is how good the 5800X3D still is. Pure gamers shouldn't even consider the 7950X3D. Stay with the 5800X3D or wait for the 7800X3D.
I was reading that the 7800X3D is likely better for most people since it's homogeneous and the 7950X3D doesn't spool up right in some games. So you're probably spot on.
 
Jesus Christ:
hub.png
That's pretty crazy if it winds up anywhere near true. Does anybody here play Satisfactory? I heard that the 5800X3D gave some really nice uplift in that game but never saw any actual numbers, and it's probably the most intensive game I have any interest in. If true, I'd love to see what the 7800X3D does there.
 
It's just ridiculous that Hardware Unboxed is the only reviewer that still benchmarks CPUs using a CPU-reliant game - and they only use one game! Factorio - that's it. So many simulation/tycoon/strategy games out there that are bogged down by CPU performance - and the reviewers ignore all of them in favor of what fits their typical BS narrative, "CPUs don't really matter anymore, put your money into this (way overpriced) GPU instead (that won't actually noticeably increase game performance - just graphics)."
 
It's just ridiculous that Hardware Unboxed is the only reviewer that still benchmarks CPUs using a CPU-reliant game - and they only use one game! Factorio - that's it. So many simulation/tycoon/strategy games out there that are bogged down by CPU performance - and the reviewers ignore all of them in favor of what fits their typical BS narrative, "CPUs don't really matter anymore, put your money into this (way overpriced) GPU instead (that won't actually noticeably increase game performance - just graphics)."
I'd honest to God love a SimCity 4 comparison, lol. That game is notoriously locked to a single core that can get really bogged down later in the game.
 
It looks like the 7000 series doesn't benefit quite as much from the extra cache, but the results are still nice and the efficiency looks great. It looks like the 7800X3D will be the AM5 CPU to get for gaming and the 7950X is a better value for production, but this generally gives you the best of both worlds for not too much more than the 7950X.

I don't really care for the asymmetrical design, or especially that it requires software with updated profiles to work properly. I'm not a fan of Intel going big/little, and this seems worse due to the extra software requirements and the complexity involved in assigning cores based on the type of task rather than just how demanding it is.

I'm going to think it over some, but I'll probably just grab a 5800X3D since it would be a drop-in upgrade for me and still isn't far behind the best in gaming. The only thing holding me back is that I'll probably upgrade to 32GB of RAM, and then I'll be a motherboard away from updating my secondary PC, so I'll probably still end up buying just as many components. However, a DDR4 kit and B450 board would be a lot cheaper than the DDR5 kit and X670 board I'd get if I made the jump to AM5.

On the Intel side, the 13900K is still a top-end gaming CPU and the 13600K is a really nice bang-for-the-buck CPU, but they're unlikely to have nearly as many upgrade options available as an AM5 system, which is a big factor for me when everything else seems fairly equal.

I wouldn't say they don't benefit as much from the cache, but more that the architecture improvements have made the CPU far less of a bottleneck than the 5000 series was.

It's just ridiculous that Hardware Unboxed is the only reviewer that still benchmarks CPUs using a CPU-reliant game - and they only use one game! Factorio - that's it. So many simulation/tycoon/strategy games out there that are bogged down by CPU performance - and the reviewers ignore all of them in favor of what fits their typical BS narrative, "CPUs don't really matter anymore, put your money into this (way overpriced) GPU instead (that won't actually noticeably increase game performance - just graphics)."



Simulation games are very hard to benchmark properly due to the dynamic nature of most modern simulation games. If there's no way to do an in-game rendered replay, which is how most outlets benchmark FS2020, you can't get apples-to-apples numbers, so you have to do multiple run-throughs to get a +/-5% average, which is still too large a gap. While I definitely enjoy playing simulation/strategy games, I can't think of a single one that's worth wasting benchmark time on, and that includes FS2020.

I'd honest to God love a SimCity 4 comparison, lol. That game is notoriously locked to a single core that can get really bogged down later in the game.
It's a limitation of the game; it doesn't matter what CPU you have, you'll still hit the same problem, and benchmarking it would take forever for a game that maybe 1,000 people on a good day still play.
 
Cities: Skylines and any of the Frontier Developments games (Planet Coaster, Planet Zoo, Jurassic World Evolution, etc.), and even unassuming-looking games like Kingdoms and Castles, all hit the CPU *hard* once you get a big city/park. I want to see those as benchmarks. They were the main gaming* reason I wanted to upgrade my 3900X to a 5950X, and it was worth it.
Also Flight Simulator, but I heard more recent patches helped with that. Hell, even No Man's Sky is weirdly CPU dependent. But that game's performance changes every patch, so who knows.
I also absolutely murder my CPU with Java Minecraft mods.
I have yet to get into Satisfactory.

*I also do a lot of transcoding/rendering and that was a more linear improvement.

I've said in other threads that 1080p performance, or any gaming performance over like 200fps, is bullshit and worthless in my opinion. There's a limit of "good enough," and I'm *never* gonna game at 1080p on my computer ever again. I just want to load up Cities: Skylines and have it hit anywhere near 60fps for a damn change.
 
But wasn't AMD's flagship CPU always cheaper than Intel's flagship?
The AMD FX-62 was $1,031. Intel's top CPU at the time was the Pentium Extreme 965, which was $999. Just a couple months after the FX-62 released was when Intel launched the first products in the Core product line, using the Conroe microarchitecture. The top of that product line, the X6800, was also $999. AMD responded by pushing the Windsor architecture further with the FX-74 and FX-76, but couldn't make up the performance deficit. Both of those processors launched at the $999 price point to compete with Intel. With the launch of Phenom the following year, the period where AMD competed on price instead of raw performance began. Intel Core really disrupted the CPU market and led to their market dominance for the next decade.
 
It's a limitation of the game; it doesn't matter what CPU you have, you'll still hit the same problem, and benchmarking it would take forever for a game that maybe 1,000 people on a good day still play.
Yes yes, I know. It was just a "what if" type of wish, lol. Wouldn't want to get in the way of the millionth triple digit fps CS:GO benchmark at 720p.
 
Glad to see the performance looking so great; I sent back a 7950X a month or so ago knowing that the 3D version would show up in February rather than April-July as I had feared. Not hating that they released the top version first, either; you can make a 7950X3D behave like a 7800X3D, but not vice versa. The pricing is also quite reasonable all things considered, being the same as the 7950X at its launch (and both of them are a bit cheaper than the "you pay $999+ for the top gaming chip, be it HEDT or mainstream" that I was used to over the past decade or so).

The big concern I have is mostly about how fast Linux specifically will deal with the "cache vs. max frequency, and sometimes crossing over" scheduler issues. I know that a lot of time and money is supposedly being selectively thrown at Windows drivers (and, from what we saw with Intel's P vs. E core schedulers, the focus is on Windows 11 and not much earlier), but I don't want that to be the requirement to get the most out of the proc. Hopefully AMD, and perhaps third parties like Valve, will go out of their way to contribute to improving the Linux performance of these X3D CPUs in gaming and across the board. Still, even without that it's seemingly at least as good as the already excellent 7950X in most applications, so from the start I'm fairly confident I'll get overall solid performance looking at all of these factors.
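In the meantime, on Linux you can approximate what the Windows driver is doing by pinning a game to the V-Cache CCD yourself. A minimal sketch, assuming the cache CCD exposes logical CPUs 0-7 with SMT siblings 16-23 (that numbering is an assumption; check `lscpu -e` on your own box):

```python
import os

# Assumption: the V-Cache CCD owns logical CPUs 0-7, with SMT siblings 16-23.
# Verify the real topology with `lscpu -e` before using these numbers.
CACHE_CCD_CPUS = set(range(0, 8)) | set(range(16, 24))

def pin_to_cache_ccd(pid: int) -> None:
    """Restrict an already-running process (e.g. a game) to the V-Cache CCD."""
    os.sched_setaffinity(pid, CACHE_CCD_CPUS)
    # Note: threads the process has already spawned keep their old affinity;
    # launching the game under taskset from the start is more thorough.

# Example usage with a hypothetical game PID:
# pin_to_cache_ccd(12345)
```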
 
Simulation games are very hard to benchmark properly due to the dynamic nature of most modern simulation games. If there's no way to do an in-game rendered replay, which is how most outlets benchmark FS2020, you can't get apples-to-apples numbers, so you have to do multiple run-throughs to get a +/-5% average, which is still too large a gap. While I definitely enjoy playing simulation/strategy games, I can't think of a single one that's worth wasting benchmark time on, and that includes FS2020.
I think it's more about difficulty than impossibility, which just means the reviewers are complacent (at least the big ones that have the time/money); it's their job to do the difficult tasks involved with reviews. Some games allow extensive custom modding, map/scenario editing, save-game editing, etc. You can create something that is as linear and scaled-up as possible while still simulating the calculations involved in a dynamic game.
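A sketch of the kind of sanity check that would tell a reviewer whether such a scripted run-through is repeatable enough to publish; the FPS values below are placeholders, not numbers from any review:

```python
import statistics

# Placeholder average-FPS results from repeated manual run-throughs of the
# same save/scenario. Not real review data.
runs_fps = [87.2, 84.9, 88.1, 86.4, 85.7]

mean_fps = statistics.mean(runs_fps)
stdev_fps = statistics.stdev(runs_fps)
cv_percent = 100 * stdev_fps / mean_fps  # coefficient of variation

print(f"mean {mean_fps:.1f} fps, stdev {stdev_fps:.1f}, CV {cv_percent:.1f}%")
# Run-to-run spread of ~1-2% is tight enough to compare CPUs; the +/-5%
# spread mentioned earlier in the thread would indeed be too noisy.
```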
 
It's just ridiculous that Hardware Unboxed is the only reviewer that still benchmarks CPUs using a CPU-reliant game - and they only use one game! Factorio - that's it. So many simulation/tycoon/strategy games out there that are bogged down by CPU performance - and the reviewers ignore all of them in favor of what fits their typical BS narrative, "CPUs don't really matter anymore, put your money into this (way overpriced) GPU instead (that won't actually noticeably increase game performance - just graphics)."
It's even more ridiculous when you read the comments and people are crying about 720p/1080p benchmarks. It's like they can't grasp that you are trying to see how fast the CPU is relative to others with the GPU limits removed. Say a CPU is 40% faster than another at a low resolution but equal at 4K. You can bet a game will come along eventually that needs that difference in performance, and you'll have an idea of how they will compare.
 
You can bet a game will come along eventually that needs that difference in performance, and you'll have an idea of how they will compare.
That's cool and all, but can you really blame people for wanting to see some real-world differences today? Again, reviewers should make more of an effort to show off CPU-heavy games instead of trotting out the same old FPS/first-person RPG spread.
 
It's just ridiculous that Hardware Unboxed is the only reviewer that still benchmarks CPUs using a CPU-reliant game - and they only use one game! Factorio - that's it. So many simulation/tycoon/strategy games out there that are bogged down by CPU performance - and the reviewers ignore all of them in favor of what fits their typical BS narrative, "CPUs don't really matter anymore, put your money into this (way overpriced) GPU instead (that won't actually noticeably increase game performance - just graphics)."
Gamers Nexus has FFXIV at least, which does benefit from newer architectures and cache quite a bit.
 
Gamers Nexus has FFXIV at least, which does benefit from newer architectures and cache quite a bit.
Did not notice it was an MMORPG - good to see something different from the single-player first-person benchmarks. But it looks like the cache didn't help too much; the 5800X3D is beaten by the 7700X. And we can't really determine anything definitive with the 7950X3D because, as Hardware Unboxed already showed, it appears the split design of the CPU reduces the benefit of the cache - and Gamers Nexus did not run any "simulated" cache-only tests with the non-cache cores disabled. We'll have to wait for the 7800X3D to see how the cache really fares in these benchmarks.
 
I think it's more about difficulty than impossibility, which just means the reviewers are complacent (at least the big ones that have the time/money); it's their job to do the difficult tasks involved with reviews. Some games allow extensive custom modding, map/scenario editing, save-game editing, etc. You can create something that is as linear and scaled-up as possible while still simulating the calculations involved in a dynamic game.

It really shouldn't be any harder than playing a CPU-intensive turn-based game like Civilization... saving a game at a point where there are many things to calculate, loading the same save on different systems, hitting the next-turn button, and using a stopwatch if you have to. A lot of people do play simulation games... when it comes to a CPU, I would like to know how fast it is calculating the next turn. There are some games that really do take a while to calculate every turn. They aren't sexy enough to grab YouTube eyeballs, I guess. I don't really care how many FPS Civilization is running at; any decent GPU will take care of that end. I want to know if I can look forward to late-game 20-second or 5-second next-turn clicks. I would have to think it's exactly those types of games that would do well with massive cache.
 
It really shouldn't be any harder than playing a CPU-intensive turn-based game like Civilization... saving a game at a point where there are many things to calculate, loading the same save on different systems, hitting the next-turn button, and using a stopwatch if you have to. A lot of people do play simulation games... when it comes to a CPU, I would like to know how fast it is calculating the next turn. There are some games that really do take a while to calculate every turn. They aren't sexy enough to grab YouTube eyeballs, I guess. I don't really care how many FPS Civilization is running at; any decent GPU will take care of that end. I want to know if I can look forward to late-game 20-second or 5-second next-turn clicks. I would have to think it's exactly those types of games that would do well with massive cache.
That is one of the other things that drive me nuts. Benchmarks for games like Total War and Civilization that measure only in FPS! If you're not measuring turn time too, don't bother including those games at all.
 
It really shouldn't be any harder than playing a CPU-intensive turn-based game like Civilization... saving a game at a point where there are many things to calculate, loading the same save on different systems, hitting the next-turn button, and using a stopwatch if you have to. A lot of people do play simulation games... when it comes to a CPU, I would like to know how fast it is calculating the next turn. There are some games that really do take a while to calculate every turn. They aren't sexy enough to grab YouTube eyeballs, I guess. I don't really care how many FPS Civilization is running at; any decent GPU will take care of that end. I want to know if I can look forward to late-game 20-second or 5-second next-turn clicks. I would have to think it's exactly those types of games that would do well with massive cache.
I was thinking that. Is it really so hard to just load a save game from later in a large sim game and see how end-turn / asset loading goes? I get a bit tired of CPU benchmarks using almost the exact same metrics as GPU benchmarks.
 
I was thinking that. Is it really so hard to just load a save game from later in a large sim game and see how end-turn / asset loading goes? I get a bit tired of CPU benchmarks using almost the exact same metrics as GPU benchmarks.
I mean, thinking about it, I guess a reviewer could argue... well, the game AI will do something a little different every run. However, that should also be easily controlled for. How long would it really take to reload the game, hit next turn 10 times for each CPU, and average them?
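That really is about a ten-line script if the game can be driven from outside at all; a minimal stopwatch sketch, where `advance_turn()` is a hypothetical hook standing in for however you trigger the next turn:

```python
import statistics
import time

def advance_turn() -> None:
    """Hypothetical hook: send the next-turn keypress / call the game's API here."""
    ...

def benchmark_turns(n_turns: int = 10) -> None:
    times = []
    for _ in range(n_turns):
        start = time.perf_counter()
        advance_turn()                      # stopwatch around the turn calculation
        times.append(time.perf_counter() - start)
    print(f"avg turn time: {statistics.mean(times):.2f} s "
          f"(min {min(times):.2f}, max {max(times):.2f})")

# benchmark_turns()
```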
 
I think it would be funny if Intel called their "3D" vcache equivalent something like "VR".
 
AMD and Intel aren't even close to equal on performance/power right now. There's a lot of data on the net, even on power-normalized graphs outside of the Cinebench and AT game benchmarks you showed (where it's entirely GPU-limited, btw), showing that Intel consumes way more power for the same amount of work.
This man just said that in a power-normalized test Intel consumes more power. I can't even make this shit up.

It's as if you have no clue what a power-normalized test is, holy shit.
 
This man just said that in a power-normalized test Intel consumes more power. I can't even make this shit up.

It's as if you have no clue what a power-normalized test is, holy shit.
Cool, latch on to the mistake I made and ignore the rest of my comment, and all the other people who corrected your argument that AMD and Intel have the same power/performance. Let me correct myself: on power-normalized graphs, Intel has way less performance for the same power. Or vice versa, you know, because it works both ways anyway.

Admit to your mistake and move on. Stop grasping at straws.
 
This man just said that in a power-normalized test Intel consumes more power. I can't even make this shit up.

It's as if you have no clue what a power-normalized test is, holy shit.
I mean, let's be honest here. It doesn't matter what benchmark anyone finds: Intel uses more power than AMD. Some tests are better than others, but there is no dancing around the fact that Intel sucks at power/performance compared to AMD.
 