AMD Confirms Zen 5 will Get Ryzen 8000 Series Branding, "Navi 3.5" Graphics in 2024

https://www.tomshardware.com/news/nvidia-to-sell-550000-h100-compute-gpus-in-2023-report

"While we don't know the precise mix of GPUs sold, each Nvidia H100 80GB HBM2E compute GPU add-in-card (14,592 CUDA cores, 26 FP64 TFLOPS, 1,513 FP16 TFLOPS) retails for around $30,000 in the U.S. However, this is not the company's highest-performing Hopper architecture-based part. In fact, this is the cheapest one, at least for now. Meanwhile in China, one such card can cost as much as $70,000."

CDW was reportedly selling them for $30,000 (discontinued as of Nov 2023)

Microsoft is getting a discount, so their cost is $10,000 for each MI300X.
Yeah, I just looked at my CDW list and it's coming in at $33,000 CAD, also listed as back-ordered and discontinued.

We'll see if the part gets reinstated at their new facilities in Mexico once they have H200 production up and running, but I doubt they will.

Microsoft has a lot of leverage on AMD for pricing. Microsoft recently spun up their own Maia AI accelerators in their data centers, and while they are slower than the MI300, they are cheap and still fast enough for many of the canned workloads the existing customer base is already running, so the MI300 is competing against them in that respect.

Maia is built on TSMC 5nm and has strong TOPS and FLOPS, but it was designed before the LLM explosion (it takes ~3 years to develop, fab, and test an ASIC). It is massive, with 105B transistors (vs. 80B in the H100). It cranks out 1,600 TFLOPS of MXInt8 and 3,200 TFLOPS of MXFP4. Its most significant deficit is that it has only 64 GB of HBM, paired with a ton of SRAM; i.e., it looks like it was designed for older AI models like CNNs. Microsoft went with only four stacks of HBM instead of 6 like Nvidia and 8 like AMD. Its memory bandwidth is 1.6 TB/s, which beats out AWS Trainium/Inferentia at 820 GB/s and is well under Nvidia, which has 2x 3.9 TB/s.
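To put those numbers in perspective, here's a rough back-of-envelope comparison; the Maia figures come from the paragraph above, while the H100 and MI300X numbers are approximate published peaks I'm assuming for illustration, not vendor-verified benchmarks:

```python
# Compute-to-bandwidth ratio: a chip with lots of FLOPS but relatively
# little HBM bandwidth needs far more data reuse per byte to stay busy,
# which is exactly where memory-bound LLM inference hurts.
accelerators = {
    # name: (peak low-precision TFLOPS, HBM bandwidth TB/s, HBM capacity GB)
    "Maia 100 (MXInt8)": (1600, 1.6, 64),   # figures quoted above
    "H100 SXM (approx)": (1979, 3.35, 80),  # assumed published peak
    "MI300X (approx)":   (2615, 5.3, 192),  # assumed published peak
}

for name, (tflops, tb_s, gb) in accelerators.items():
    # FLOPs the chip can issue per byte of HBM traffic at peak
    print(f"{name:20s} {tflops / tb_s:5.0f} FLOPs/byte, {gb:3d} GB HBM")
```

Maia needs roughly 1,000 FLOPs of reuse per byte of HBM traffic to stay fed, versus roughly 500-600 for the GPUs, which is consistent with the "designed for older, compute-dense models" read.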

But the Maia accelerators are running GPT-3.5, Copilot, and Bing today.
 
But you shouldn't buy them at all: the H200 will be out and available before you get your H100, and by the time you have it installed and running you'd be looking at B100 numbers and asking yourself why you paid for the H100 at all, since it's a quarter the speed of the new parts.
Looking at how well the four-year-old A100 has kept its value (the 80GB seems to go for $16k-17k, $7k-8k for the 40GB), if you can get an H100 at MSRP at any time and you can actually use it, I'm not sure it's ever that bad a buy. It would be quite the assumption for a lot of customers that they'll be able to buy a non-scalped B100 near launch. Always a risk, obviously, but years of H100 availability haven't killed the A100 yet; it could take a long time for the B100 to push the H100 much below MSRP.
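As a sanity check on that value argument, here's a quick dollars-per-TFLOP comparison using the street prices mentioned in this thread and with-sparsity FP16 tensor peaks (the A100 figure of 624 TFLOPS is my assumption from Nvidia's published specs; prices obviously fluctuate):

```python
# Street price vs. throughput. The H100 PCIe FP16 figure (1,513 TFLOPS
# with sparsity) is from the article quoted earlier; the A100 80GB is
# 624 TFLOPS on the same with-sparsity basis. Approximations only.
cards = {
    "A100 80GB (used)": (16_500, 624),
    "H100 PCIe (MSRP)": (30_000, 1_513),
}

for name, (price_usd, tflops) in cards.items():
    print(f"{name}: ${price_usd / tflops:,.1f} per FP16 TFLOP")
```

That works out to roughly $26/TFLOP for the used A100 versus roughly $20/TFLOP for an MSRP H100, so the A100 holding its value doesn't automatically make it the better buy.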
 
I just wish AMD would increase core counts. Intel is really taking them to task in productivity.

Or at least slide their name branding up one peg. The world is ready for an 8-core Ryzen 5 and a 12-core Ryzen 7.
They have 8-core and 12-core Ryzens... know something that will blow your mind? They even have, wait for it... 16-core Ryzens that you can buy right now and put in your AM4 or AM5 mobos :nailbiting:
 
I just wish AMD would increase core counts. Intel is really taking them to task in productivity.

Or at least slide their name branding up one peg. The world is ready for an 8-core Ryzen 5 and a 12-core Ryzen 7.
Or maybe Intel needs more than 8 performance cores?
 
All three will get incredibly rich. Intel still needs to get it together, but just look at AMD's and Nvidia's stock.
 
AMD smokes Intel on the server side, especially in core counts, but the average user just does not need that many cores. It might look good in synthetic scores, but I downgraded from a 5900X to a 5800X3D because I just did not benefit from the higher core count compared to the faster effective single-threaded speed of the 5800X3D. At current fab costs I doubt we will be seeing an 8-core Ryzen 5.

Yeah, but AMD has more or less been at 6/8 cores for Ryzen 5/7 since the first Ryzen CPUs. I think it is time they start moving the core count up.
 
As far as Zen 5, I'm curious what improvement it will bring, but isn't it a bit of an issue with the 8000 branding, considering that some Zen 4 chips will also have it, notably the new APUs with the AI "accelerator" onboard? It would have been better to swap to 9000. Also, if they're going to wait until the end of 2024 to release it, and then dick around until 2025 for the X3D, that's even more annoying. I wish they'd just release the 3D cache chips at the same time as the others, and preferably launch ALL high-end chips first. I can HOPE that they'll have an 8950X3D with both additional cache on all CCDs AND full clock speed by then, instead of the heterogeneous 7950X3D with the cache cores attenuated in speed somewhat and the scheduler issues at launch. I'm also curious about the next generation of X770E-style chipsets; if Threadripper is going full-on workstation/jr-server core-heavy mode, it would be nice to have the option of a board with quad-channel memory, more PCIe lanes, etc. somewhere in the spec. AMD must keep their eyes open given Intel's 13th/14th series and their upcoming competitor, but so far they're doing pretty well overall.

The "Navi 3.5" however thing concerns me. I really want to support AMD when it comes to GPUs, but if they go through the rest of 2024 and into 2025 no less with slightly refined RDNA3 (or as Marees mentioned, it will have some new features at least) I worry unless they can really stick both the performance and the price. I don't know when Nvidia is planning to come out with their next generation, but if that happens at the end of 2024 or early 2025 then it will be even worse in comparison if RDNA4 isn't there with improved RT, AI focused/capable hardware etc. Its not that RDNA3 is bad and I am glad for their FOSS drivers on LInux (though annoyed that licensing makes HDMI 2.1 support not work last I checked) and that their 7900XTX and 7900XT are highly performant for the price, but the weaker value of some other cards like as discussed in the OP has undermined what should have been the place AMD could shine. If the RDNA 3.5 stuff fixes some of these issues and has good price/performance and features but comes as a stopgap similar to the Nvidia "Super" cards without delaying the true next generation, that could be good. However, if it takes up resources that could be better spent preparing for competitors from Nvidia that would be less desirable. Nvidia does proprietary garbage behavior at the best of times, but when the feel that they don't have any real competition from AMD it gets much worse. AMD has great potential but they have to deal with som of their issues to make the most of their stepping forward, on both CPUs and GPUs.
 
The "Navi 3.5" thing, however, concerns me. I really want to support AMD when it comes to GPUs, but if they go through the rest of 2024, and into 2025 no less, with slightly refined RDNA3 (or, as Marees mentioned, it will have some new features at least), I worry, unless they can really stick both the performance and the price. [snip]
Per my understanding RDNA 3.5 is purely for integrated graphics in APUs

RDNA 4 is for discrete graphics cards, to be released in a 6-to-9-month time frame
 
Glad I went ahead with my 7800X3D then. Will wait and see how those turn out.
I'm still rolling with my 4790K. I refuse to get anything other than 16 cores on the same CCD, since current gen consoles are 16 threads.
 
I can HOPE that they'll have an 8950X3D with both additional cache on all CCDs AND full clock speed by then, instead of the heterogeneous 7950X3D with the cache cores attenuated in speed somewhat and the scheduler issues at launch. [snip]
In case you TL;DR'd the articles written about it: the cache can't handle the heat, hence the lower clock speed and the ingenious way to partially address that issue with 3D cache on one CCD and not the other. Perhaps you should do less whining and go do something about it by becoming a CPU engineer, solving those issues, and then seeing the back end of things that we average joes don't see: money, parts supply, time constraints, investors, can't break the laws of physics, etc. Just enjoy what massive CPU power you have to pick from as a consumer that we didn't have years ago. If I had a few thousand lying around I could get a HEDT system (AMD of course) that could rival servers of the mid-to-late 2000s that I definitely could not afford at the time. Too many whiners and not enough figuring out how to solve the things they deem to be problems.
 
I'm still rolling with my 4790K. I refuse to get anything other than 16 cores on the same CCD, since current gen consoles are 16 threads.
That logic doesn't make sense. The custom Zen 2 chips in the PlayStation and Xbox Series X/S are 8-core/16-thread. Meaning, 2 CCDs.

https://www.windowscentral.com/xbox-series-x-specs

"
CategoryXbox Series X
Processor8x Cores @ 3.8 GHz (3.66 GHz w/ SMT) Custom Zen 2 CPU
Graphics12.155 TFLOPS, 52 CUs @ 1.825 GHz Custom RDNA 2 GPU
Die Size360.45 mm2
Process7nm Enhanced
 
Also, if they're going to wait until the end of 2024 to release it, and then dick around until 2025 for the X3D, that's even more annoying. I wish they'd just release the 3D cache chips at the same time as the others, and preferably launch ALL high-end chips first.
I'd prefer their V-Cache to be standard on all their chips since it clearly has benefits. AMD is trying to milk FOMO by releasing X3D later on.
I'm also curious about the next generation of X770E-style chipsets; if Threadripper is going full-on workstation/jr-server core-heavy mode, it would be nice to have the option of a board with quad-channel memory, more PCIe lanes, etc. somewhere in the spec. AMD must keep their eyes open given Intel's 13th/14th series and their upcoming competitor, but so far they're doing pretty well overall.
The benefit of quad-channel memory is more for built-in graphics, which AMD either charges a lot more for or doesn't include an adequate amount of. CPUs aren't very memory-bandwidth-limited anyway, unless you go to Threadripper numbers of cores.
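For a rough sense of scale, peak DRAM bandwidth is just channels x transfer rate x bus width. A minimal sketch (DDR5-6000 is an assumed example speed; real-world throughput runs well below these theoretical peaks):

```python
# Theoretical peak bandwidth: channels * MT/s * 8 bytes per 64-bit channel.
def peak_gb_s(channels: int, mega_transfers: int) -> float:
    return channels * mega_transfers * 8 / 1000

print(f"dual-channel DDR5-6000: {peak_gb_s(2, 6000):.0f} GB/s")  # 96 GB/s
print(f"quad-channel DDR5-6000: {peak_gb_s(4, 6000):.0f} GB/s")  # 192 GB/s
```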
AMD has great potential, but they have to deal with some of their issues to make the most of their step forward, on both CPUs and GPUs.
As long as AMD catches up to Nvidia's ray-tracing performance I think they'll do fine. That, and AMD needs to start moving their GPUs to 5nm or even 3nm at this point. AMD's CPU problem is their chipset cost, as motherboards are still rather pricey. AMD needs to stop making chipsets like the A620 that don't allow overclocking. That shit needs to stop.
Yeah, but AMD has more or less been at 6/8 cores for Ryzen 5/7 since the first Ryzen CPUs. I think it is time they start moving the core count up.
I feel that 6/8 cores is still fine. Unless games and applications can make good use of those extra cores, I'd rather they focus on IPC. There's no way AMD is going to add cores without increasing the price; AMD can't even put their 780M GPU in their CPUs without a dramatic price increase.
 
Yeah, but AMD has more or less been at 6/8 cores for Ryzen 5/7 since the first Ryzen CPUs. I think it is time they start moving the core count up.

I say software needs to catch up first; most programs I use just do not leverage that many cores. Otherwise we wind up paying more for something we really do not use.
 
There's still a market for 4-core/8-thread parts, IMO. Sure, more cores for fewer dollars would be nice, and that's the part of the lineup where Intel has a good argument right now, but it's not a big issue; people who love cores already have a good offer from them.
 
I feel that 6/8 cores is still fine. Unless games and applications can make good use of those extra cores, I'd rather they focus on IPC. There's no way AMD is going to add cores without increasing the price; AMD can't even put their 780M GPU in their CPUs without a dramatic price increase.

For games, yes, but for some things like Handbrake there seems to be a decent performance increase.

https://tpucdn.com/review/amd-ryzen-9-7950x/images/encode-h265.png

Compare the 7900X to the 7700X. There isn't much of a clock difference between the two, although I am not sure if the 7900X has something else the 7700X lacks. I too would like to see IPC rise further, but it seems like we're back in the era where Intel kept the core count the same for nearly a decade.
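That shape is roughly what Amdahl's law predicts for a mostly-parallel encoder. A toy sketch, assuming (purely hypothetically) that 95% of the x265 workload parallelizes:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p.
def speedup(p: float, cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / cores)

P = 0.95  # assumed parallel fraction, illustrative only
for cores in (8, 12, 16):  # 7700X, 7900X, 7950X core counts
    print(f"{cores:2d} cores: {speedup(P, cores):.2f}x over one core")
```

With those assumptions you get about 5.9x at 8 cores, 7.7x at 12, and 9.1x at 16: real gains from 8 to 12 cores, but diminishing returns, which matches the shape of the TechPowerUp chart.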
 
From a reliable leaker:

https://twitter.com/Kepler_L2/status/1665759888490758147?s=20

RDNA 3.5/gfx11.5 is, as the name implies, an in-between gen with features from both RDNA3 and 4.

It contains the new RDNA4 SALU with support for FP32 instructions and improvements to the geometry engine, but not other RDNA4 features like the new scheduler and improved RT cores.
Analysis of RDNA 3.5 vs RDNA 4 features revealed via LLVM compiler changes

https://chipsandcheese.com/2024/02/04/amd-rdna-3-5s-llvm-changes/
 
I'm still rolling with my 4790K. I refuse to get anything other than 16 cores on the same CCD, since current gen consoles are 16 threads.
I'm on that too; it's a hot piece of garbage... but it gets me by, overclocked at 4.8 GHz. You're super not doing yourself a favor by not upgrading, though.
 
That logic doesn't make sense. The custom Zen 2 chips in the PlayStation and Xbox Series X/S are 8-core/16-thread. Meaning, 2 CCDs.


They are custom monolithic APUs.
 
For games, yes, but for some things like Handbrake there seems to be a decent performance increase.
Yes, people who do a lot of Handbrake work, who need it to go fast, and for whom GPU encode quality is not good enough, should go for higher core counts, but that's not necessarily that many people.

AMD should continue to sell CPUs that use a single CCD's worth of cores, and once that number shifts to 12 it will be time to offer 10-12 cores in the mid-bracket SKUs, IMO; 8-core will then be the bad-yield bin, the 6-core option moves up to 8, 8 to 10, and so on. That technological aspect should be what decides it for now, because 6-8 cores is enough for many customers anyway. The Arrow Lake rumors of no HT point to that: Intel offers so many cores now that their data makes them feel 14 threads is enough for many users, which rings true to me.

With how good and small those Zen C cores seem to be, it looks more and more like we'll go toward a world where more CPU space is dedicated to special hardware a la Apple SoCs (for better and for worse).
 
With how good and small those Zen C cores seem to be, it looks more and more like we'll go toward a world where more CPU space is dedicated to special hardware a la Apple SoCs (for better and for worse).
I'm not super sold on them yet; there is a reason 6T gates are primarily used on low-voltage mobile chips. Yeah, they are smaller and use significantly less energy, but they also clock significantly lower than 8T gates, and they have a single instruction input, not two, which has an adverse effect on Hyper-Threading.
It's a far larger architectural change than most give it credit for. From my understanding, it is far more suited to the roles filled by the existing AMD Embedded lineup than to the desktop and mobile lineups.
 
Per my understanding RDNA 3.5 is purely for integrated graphics in APUs

RDNA 4 is for discrete graphics cards, to be released in a 6-to-9-month time frame
Ah, mayhap I overlooked that. I thought the hypothetical "7950XTX" referenced as competing more heavily with the 4090 would be part of RDNA 3.5, using said improvements to eke out more performance.

In case you TL;DR'd the articles written about it: the cache can't handle the heat, hence the lower clock speed and the ingenious way to partially address that issue with 3D cache on one CCD and not the other. [snip] Too many whiners and not enough figuring out how to solve the things they deem to be problems.

The issue has been known for several generations now, since the 5800X3D. Either by incremental improvements in the available process and materials or by moving to new ones entirely, I don't think it's unreasonable to expect a solution to this. Also, I'd feel the AMD solution was more "ingenious" if they had included an on-die feature like Intel did with their own P+E core chips, or at the very least launched with a comprehensive platform of chip firmware, mobo chipset governance, and OS scheduling to help get the most out of the design, plus, if necessary, an ideally open-source, platform-independent profiler with profiles for many common games and applications for core preference (frequency vs. cache), among other tweaks. The fact that they launched their top-end chip (which was and remains good hardware when everything goes right) where the only guidance was "if Windows Game Bar says X is a game, then push it to the cache cores first" is unacceptable.

Not that bumps in the road weren't expected trying something new - Intel had them too with the P and E core implementation since 12th gen - but Intel minimized the issues with on-die usage monitoring plus Thread Director updates at the OS level. As much as I like to support AMD when they do something good (like libre Linux GPU drivers), they fuck up as well, and a fanboy-level defense that nobody should criticize them unless they own their own chip fab that does it better isn't helpful; it's even worse when it's a "please thank you, for-profit company, for allowing me to consume product" suggestion. Technology progresses, but that doesn't mean there aren't issues along the way, much the same way that I won't stop criticizing Nvidia for their constant proprietary standards and lockdowns, massive price increases, and other issues just because they offer the most powerful GPU at the moment. For AMD, if the issue with cache is going to continue (and they won't consider other layout changes), then it would be nice to have the firmware/chipset/software set up to deal with this chip design, so that users at least have a good chance of their system using the proper cores for any given workload, and an easy way for tweakers to set things as they wish without a lot of proprietary third-party tools. Previous discussions of the issue suggested one reason AMD didn't go to the same lengths was that they expected to be done with these issues entirely on future chips, and that the current compromise would be a temporary awkwardness for multi-CCX 3D cache chips before returning to a symmetrical arrangement in a generation or two, where high-end chips have the same additional cache on all CCXs. If that's not their plan, then I can only hope the above goes forward so that users aren't thrown into the deep end of manual pinning or forced to rely on Windows Game Bar to handle things for them.
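For anyone stuck doing it by hand today, here's a minimal sketch of manual pinning using the psutil library. The process name and the assumption that logical CPUs 0-15 (cores 0-7 plus their SMT siblings) belong to the V-cache CCD are both hypothetical; check your own topology first, since that mapping is not guaranteed:

```python
# Pin a running game to the V-cache CCD instead of trusting Game Bar
# detection. Requires: pip install psutil. Works on Windows and Linux.
import psutil

CACHE_CCD_CPUS = list(range(16))  # ASSUMPTION: CCD0 carries the V-cache

def pin_to_cache_ccd(process_name: str) -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(CACHE_CCD_CPUS)  # restrict scheduling to CCD0
            print(f"Pinned PID {proc.pid} to CPUs {CACHE_CCD_CPUS}")

pin_to_cache_ccd("game.exe")  # hypothetical process name
```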

I'd prefer their V-Cache to be standard on all their chips since it clearly has benefits. AMD is trying to milk FOMO by releasing X3D later on.
Agreed, that would be ideal, but as discussed above, even on high-performance chips they currently need to choose between frequency/TDP and cache; hopefully this will be evolved past in the next generations one way or another. As far as FOMO, yeah, that's my issue: I'd rather have them release the standard and X3D chips at the same time than have to wait several months.
The benefit of quad-channel memory is more for built-in graphics, which AMD either charges a lot more for or doesn't include an adequate amount of. CPUs aren't very memory-bandwidth-limited anyway, unless you go to Threadripper numbers of cores.
Eh, especially with AM5 and Infinity Fabric, there's a benefit to memory speed and bandwidth on AMD platforms. Not to mention stuff like higher core counts; in any case, it would be nice to have more PCIe lanes overall, as well as the option for quad-channel memory on high-end Ryzen layouts, if they're going to make Threadripper basically "server/workstation high-core-count focused, also massively expensive" as opposed to "enthusiast all-around" like it used to be in the earlier days for both Intel and AMD. Not every chipset or subtype would need to be quad-channel, but if you're buying something equivalent to a top-tier X670E board it really should be, along with variable PCIe 5.x support levels based on the chipset and CPU you picked.
As long as AMD catches up to Nvidia's ray-tracing performance I think they'll do fine. That, and AMD needs to start moving their GPUs to 5nm or even 3nm at this point. AMD's CPU problem is their chipset cost, as motherboards are still rather pricey. AMD needs to stop making chipsets like the A620 that don't allow overclocking. That shit needs to stop.
I guess we won't see that until the next generation, RDNA4. Any revamped RDNA 3.5, as good as it could be, is not likely to have greater RT/AI capabilities, and Nvidia continues to push RT/AI as the primary thing that matters, since it allows them to maintain a lead even when AMD has equal or perhaps greater rasterization price/performance. Unfortunately, with the new AI hyperfocus, AMD really needs to make up ground to reach similar performance levels, and to ensure their open AI APIs, like ROCm and others, are as easy and widespread as CUDA, at least outside of those already locked into that ecosystem. In any case, I can only hope they are able to push out cards that compete with the NV lineup in both raw performance and value and shore up their weak links, especially as prices for GPUs as a whole continue to be dragged upward. I agree they need to sort out a wider variety of chipset costs while maintaining quality and features, which should be easier to do with the next generation.
 
The issue has been known for several generations now, since the 5800X3D. Either by incremental improvements in the available process and materials or by moving to new ones entirely, I don't think it's unreasonable to expect a solution to this.
This is a TSMC advanced-packaging issue.

TSMC designed and developed the through-silicon via with mobile SoCs in mind; they never designed it for desktops and servers. The connection uses a copper-based solder that is applied directly between the silicon layers, and that solder needs a stable melting point below the temperature that would damage the silicon itself.
TSMC has made advancements to the technology; they have gradually tweaked the formula and managed to reduce the resistance a significant amount, which allows for higher clocks and more voltage, but the thermal limit doesn't change much, maybe an extra degree or two. And since that solder layer sits under the topmost layers of silicon, it doesn't get direct contact with the spreader or the heat sink, so it gets hot quickly.

There are alternative methods that TSMC and AMD have available to them, but those processes would price the CPU out of the consumer market.

This is one of the features Intel is pushing for its new foundry services with the Adamantine cache, which isn't as fast but makes up for it with volume and a significantly cheaper cost.
 
I'd prefer their V-Cache to be standard on all their chips since it clearly has benefits. AMD is trying to milk FOMO by releasing X3D later on.
I would as well, but TSMC doesn't have the capacity to support it. The stacked cache is a separate chip, produced on a different node, that is then physically lined up on top of the CCD and soldered to it. It's not a fast process, and AMD can't afford to buy that much time at TSMC to do full runs of it.
 
The issue has been known for several generations now, since the 5800X3D. Either by incremental improvements in the available process and materials or by moving to new ones entirely, I don't think it's unreasonable to expect a solution to this. [snip] If that's not their plan, then I can only hope the above goes forward so that users aren't thrown into the deep end of manual pinning or forced to rely on Windows Game Bar to handle things for them.
All that verbiage to prove my point. Go design a CPU and find out how hard it can be. Same thing I say to the Nvidia whiners when a newly released game doesn't support DLSS or the 4000 series wasn't what they thought it should have been, blah, blah, blah. AMD has made improvements. Intel is on 14th gen, up from 12th gen. AMD is on 2nd-gen 3D cache; maybe the next iteration of it will have improvements that address some of the shortcomings. Nvidia sticks to proprietary standards because it locks people in. AMD isn't doing that so much, not that they are perfect either. Implementing OS fixes to utilize the growing number of differences in CPU designs is an uphill battle, so CPU manufacturers have to work with MS to get OS-level awareness rather than cheap workarounds. All of that requires a lot of work and companies working together. The reason I blast people so hard over this stuff at times is because they don't seem to get how hard it can be to work out all the issues that companies face. Not only that: if you are going to complain so much, go do something about it. Problems won't solve themselves. AMD doesn't have the R&D budget that Intel has, but that is changing as more companies throw money at them for server chips and AI chips (high-margin hardware). They came back from being on the verge of cardiac arrest about 8 years ago and they are putting out very competitive hardware. They demolished Intel in the HEDT market and continue to make inroads in the enterprise/server market. 3D cache is a small thing compared to everything else they are doing. Not everyone or every program can take advantage of extra cache anyway.
 
As far as Zen 5, I'm curious what improvement it will bring, but isn't it a bit of an issue with the 8000 branding, considering that some Zen 4 chips will also have it, notably the new APUs with the AI "accelerator" onboard? It would have been better to swap to 9000.
Agreed. Confusing naming has been a problem for AMD for virtually all of Zen's existence, especially on mobile: you buy a CPU whose name scheme makes it sound like a new arch, but it can actually be last gen. Intel also started doing that with many of 13th gen's non-K parts.

Also, if they're going to wait until the end of 2024 to release it, and then dick around until 2025 for the X3D, that's even more annoying. I wish they'd just release the 3D cache chips at the same time as the others, and preferably launch ALL high-end chips first.
It's unlikely they will change their release model unless Intel does something to force them to. As mentioned previously in the thread, FOMO purchases are pretty common in the hardware market. A regular Zen 5 chip should generally be as good as or better than a 7800X3D in most games, and will be strictly better at everything else; that's a compelling purchase for a lot of people. Another benefit of staggering some releases is time to bin good and bad chips for certain SKUs.

I can HOPE that they'll have an 8950X3D with both additional cache on all CCDs
Incredibly unlikely.
1. It would be expensive to make, expensive to buy, and would likely have slim margins if they wanted to sell it at an even close-to-reasonable price. And it's also pointless, as there is virtually no great benefit to V-Cache on both CCDs for V-Cache's primary customer: gamers.

I'd prefer their V-Cache to be standard on all their chips since it clearly has benefits. AMD is trying to milk FOMO by releasing X3D later on.
2. The cache is really only good for gaming, emulation, and a couple of other very specific use cases (which probably aren't worth catering to, at least not with really expensive cache). Most other computing is unaffected by the cache, or benefits only marginally. There are some other edge cases, of course.
Additionally, on gaming: V-Cache is meant for squeezing out every last frame. You can still game very, very well on a non-V-Cache CPU. It wouldn't make sense to put V-Cache on every CPU, since it does still cost more to make; AMD has to think about margins, and customers should have relatively lower-priced, decent options.

Further, on the topic of a dual-CCD processor where both CCDs have their own V-Cache: gaming and emulation don't really benefit from more cores. There are very few games which scale appreciably beyond 8 cores, and for gaming the cache tends to supersede any extra performance from the higher SKUs with more cores and higher clock speed. Emulation doesn't really scale beyond 6 cores in most cases. So... there really isn't much point in offering Zen as we know it in a dual V-Cache CCD setup. The point of the product is no compromise: you can game the best on it and you can do CPU-based workloads the best on it. The gaming can be wrangled with some software/driver/firmware logic, or it can be done manually by the user with software or firmware settings. A dual V-Cache CCD CPU would be a nearly pointless purchase for gamers, which is the primary target buyer for V-Cache.

Indeed, AMD should come up with a better solution to make sure that games and emulation always prioritize a V-Cache CCD without manual input from the user. But there are ways to manually make sure of it, so it's not like it's a fake product or something.

The best route would be for AMD to figure out a way for the V-Cache to be unified/shared between both CCDs (however, it may not be possible). That way, if a game or app really does need an extra core beyond 8, your penalty for the extra CCD use would only be about as bad as on a regular dual-CCD Zen chip, which is to say an almost invisible penalty for gaming, as of Zen 4's design.

I'm still rolling with my 4790K. I refuse to get anything other than 16 cores on the same CCD, since current gen consoles are 16 threads.

That logic doesn't make sense. The custom Zen 2 chips in the PlayStation and Xbox Series X/S are 8-core/16-thread. Meaning, 2 CCDs.
The CPU parts of the APUs in the PS5 and XSX are 8-core, single CCD.

1. 16 cores are, in general, unused by the vast majority of games, because most games don't even scale well past 6 threads. This is why a 7600X performs as well as a 7700X/12900K/etc. in most games, ignoring clock-speed differences. A few games do scale decently to 8, a scarce few actually scale appreciably with more than 8, and there might be only a couple of games which scale measurably past 12 cores; I'm thinking maybe turn time in Civilization. However, 12 vs. 16 cores in turn time, while measurable, would be purely academic rather than enriching the player's experience.

2. 6- and 8-core CPUs with V-Cache generally outperform 12- and 16-core CPUs which otherwise have extra clock speed, etc.

3. A dedicated gaming machine would be wasting cost on a 16-core CPU with the cores dedicated to any potential thread. The cost would be better diverted to V-Cache or something similar. A gamer is better off buying a 7800X3D than a 7950X, or even a 7700X rather than a 7950X. And that is part of the brilliance of AMD's product segmentation.
 
The best route would be for AMD to figure out a way for the V-Cache to be unified/shared between both CCDs (however, it may not be possible). That way, if a game or app really does need an extra core beyond 8, your penalty for the extra CCD use would only be about as bad as on a regular dual-CCD Zen chip, which is to say an almost invisible penalty for gaming, as of Zen 4's design.
That is the core concept behind Intel's Adamantine cache. In AMD's case it's not possible to share it, as the cache is physically soldered to the top of a single CCD. AMD is essentially clamshelling cache instead of DDR/GDDR.

Cache here makes up for the latency between the CPU and system memory; we're approaching the point where AMD's threads spend more time waiting for their turn at the memory channels than they do retrieving the data, so having it in cache reduces trips to memory and results in better times.
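You can see the cliff this post is describing with a crude pointer-chase sweep: each load depends on the previous one, so the timing reflects access latency, and the jump lands where the working set spills out of L3 (much later on a 96 MB X3D part than on a 32 MB one). A sketch, with the caveat that Python adds constant interpreter overhead, so only the relative step matters, not the absolute nanosecond figures:

```python
import random
import time

for entries in (2**14, 2**18, 2**22, 2**24):
    order = list(range(entries))
    random.shuffle(order)
    chain = [0] * entries
    for a, b in zip(order, order[1:] + order[:1]):
        chain[a] = b  # one random cycle through the whole array
    pos, steps = 0, 1_000_000
    start = time.perf_counter()
    for _ in range(steps):
        pos = chain[pos]  # each access depends on the last: pure latency
    elapsed = time.perf_counter() - start
    mib = entries * 8 / 2**20  # pointer footprint alone; real usage is higher
    print(f"~{mib:6.0f} MiB working set: {elapsed / steps * 1e9:5.1f} ns/access")
```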
 
That logic doesn't make sense. The custom Zen 2 chips in the PlayStation and Xbox Series X/S are 8-core/16-thread. Meaning, 2 CCDs.
Fact is, all cores on a single CCD is still the key to guaranteed microstutter-free performance.

I want all the threads, but without the risk of stuttering.
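If you want to check which logical CPUs actually share a CCD, the L3 domain is the tell, since on chiplet Ryzens each CCD has its own L3. A Linux-only sketch reading standard sysfs paths (index3 is the L3 on current Zen parts; adjust if your cache indices differ):

```python
# Group logical CPUs by the set of CPUs they share an L3 with.
from pathlib import Path

domains: dict[str, list[str]] = {}
for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    shared = cpu / "cache" / "index3" / "shared_cpu_list"
    if shared.exists():
        domains.setdefault(shared.read_text().strip(), []).append(cpu.name)

for cpu_list, members in domains.items():
    print(f"L3 shared by CPUs {cpu_list}: {len(members)} logical CPUs")
```

Each printed group is one CCD; a game pinned within a single group never pays the cross-CCD latency this post is worried about.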
 
Fact is, all cores on a single CCD is still the key to guaranteed microstutter-free performance.

I want all the threads, but without the risk of stuttering.
So you're still using a 4790K, and saying you won't accept anything less than 16 physical cores per CCD is still nonsense. As chameleoneel pointed out, most games don't scale past 6-8 cores anyway. Micro-stutter can be a problem, but not necessarily due to CPU cores being on separate dies; there are multiple variables that can contribute to micro-stutter, and AMD's 3D cache could alleviate some cases by reducing cache misses and keeping the CPU fed.
 
I was tempted to do a complete system upgrade from my 5800X (non-3D-cache version)... had all the parts picked out... was going to get the 7800X3D but decided to hold off for Zen 5, as I hear great things about it (huge IPC improvements). It's not too far off (May/June), so it makes more sense to wait.
 
So you're still using a 4790K, and saying you won't accept anything less than 16 physical cores per CCD is still nonsense. As chameleoneel pointed out, most games don't scale past 6-8 cores anyway. Micro-stutter can be a problem, but not necessarily due to CPU cores being on separate dies; there are multiple variables that can contribute to micro-stutter, and AMD's 3D cache could alleviate some cases by reducing cache misses and keeping the CPU fed.
Countermeasures for the stutter caused by CCD jumping rely totally on software like Xbox Game Bar. Some have to resort to core parking and other shenanigans.

That's just something I need to check off the list. And with the PS4 generation currently being dropped from production, I'd say scaling past 6-8 cores is just a matter of time.
 
I was tempted to do a complete system upgrade from my 5800X (non-3D-cache version)... had all the parts picked out... was going to get the 7800X3D but decided to hold off for Zen 5, as I hear great things about it (huge IPC improvements). It's not too far off (May/June), so it makes more sense to wait.

Or you can get it and use it now, then wait for the new X3D part (maybe a year later) and upgrade when it comes out. That is the nice thing about AMD: AM5 will be around for a while.
 
And with the PS4 generation currently being dropped from production, I'd say scaling past 6-8 cores is just a matter of time.
Not sure thinking in cores instead of threads and tasks works for modern game engines. The PS3 had, what, a bizarre 9 parallel threads of execution at a time, the PS4 had 8 cores, and the PS5 has 8 cores; I'm not sure how much that will change things, since console core counts have not moved much in the last 15 years. The scaling we see is software and game engines getting better, which would have happened regardless; if they were still on a PS4-level 8-core CPU, maybe they would be even better at using 8 cores than they are now, being forced to extract every possible execution cycle.

I am not so sure core count matters more than raw multithreaded performance for much of anything (except maybe VM loads, or high-security setups where you want to lock a process to particular non-shared cores); a strong 4-core still beats a weaker 8-core in games that are able to use 8 cores.
 
Countermeasures for the stutter caused by CCD jumping rely totally on software like Xbox Game Bar. Some have to resort to core parking and other shenanigans.

That's just something I need to check off the list. And with the PS4 generation currently being dropped from production, I'd say scaling past 6-8 cores is just a matter of time.
There's also the "emphasize cache" setting in the BIOS.

Heh, the PS5 reserves 1.5 cores for its OS/system functions. The Xbox does similar.
 
2. The cache is really only good for gaming, emulation, and a couple of other very specific use cases (which probably aren't worth catering to, at least not with really expensive cache). Most other computing is unaffected by the cache, or benefits only marginally. There are some other edge cases, of course.
That really depends on the workload. According to Phoronix, V-Cache does seem to help in a number of situations. AVX-512 doesn't help in many workloads either, but we still have it.
https://www.phoronix.com/review/amd-ryzen-7-7800x3d-linux/3
Additionally, on gaming: V-Cache is meant for squeezing out every last frame. You can still game very, very well on a non-V-Cache CPU. It wouldn't make sense to put V-Cache on every CPU, since it does still cost more to make; AMD has to think about margins, and customers should have relatively lower-priced, decent options.
It increases IPC, but mostly for logic. You're using cache to avoid trips to RAM, which is much slower. This is why something like Cinebench doesn't show any improvement, since cache doesn't help there; things like video rendering, editing, and image processing are not going to benefit from V-Cache.
The best route would be for AMD to figure out a way for the V-Cache to be unified/shared between both CCDs (however, it may not be possible). That way, if a game or app really does need an extra core beyond 8, your penalty for the extra CCD use would only be about as bad as on a regular dual-CCD Zen chip, which is to say an almost invisible penalty for gaming, as of Zen 4's design.
I'm sure AMD is making improvements, but much like AVX-512 I feel that V-Cache should be standard at some point. It may even help with power consumption, since an unfortunate result of the V-Cache is lower clocks: if the V-Cache can make up the performance lost to clock speed, it may be a win on power consumption. Plus, V-Cache could act as a sort of eDRAM to maybe even speed up built-in graphics. There's a lot that could be done if this V-Cache were utilized better.
 