AMD Confirms Zen 5 Will Get Ryzen 8000 Series Branding, "Navi 3.5" Graphics in 2024

erek

[H]F Junkie
Joined
Dec 19, 2005
Messages
10,900
Zen 5 and Navi 3.5 coming

"Another major disclosure is the very first mention of "Navi 3.5." This implies an incremental update to the "Navi 3.0" generation (Radeon RX 7000 series, RDNA 3 graphics architecture), which could even be a series-wide die-shrink to a new foundry node such as TSMC 4 nm, or even 3 nm, which scoops up headroom to dial up clock speeds. AMD probably finds its current GPU product stack in a bit of a mess. While "Navi 31" is able to compete with NVIDIA's high-end SKUs such as the RTX 4080, and the company is expected to release a slightly faster RX 7950 series to have a shot at the RTX 4090, the company's performance-segment and mid-range GPUs may have wildly missed the performance targets needed to prove competitive against NVIDIA's AD104-based RTX 4070 series and AD106-based RTX 4060 series, with its recently announced RX 7600 being based on older 6 nm foundry tech and performing a segment lower than the RTX 4060 Ti."


Source: https://www.techpowerup.com/309632/...000-series-branding-navi-3-5-graphics-in-2024
 
Cool stuff coming from AMD. Keep pushing Intel and Nvidia!! Competition means lower prices and that makes everyone happy
 
Curious if zen 6 will also be AM5 so we get 3 gens out of the platform
In the October/November timeframe, leakers said AMD was still having internal discussions about this. A new platform would let them better leverage new features and architectural technologies, while staying on AM5 would of course benefit consumers and probably be cheaper for partners, etc.
However, late-December leaks suggested they had decided Zen 6 would be a new platform. I haven't heard an update on that, but I also haven't really been paying attention for the past couple of weeks, except that I've heard very recently that Zen 6 is now being talked about as a refinement of Zen 5. Back in October/November it was talked about as a much bigger redesign, meant to make it feel more like a monolithic design even though it will still use chiplets. The plan could still be the same, even though the language has seemingly changed.

My hunch is that if they do go ahead with a new socket for Zen 6, they may still be able to offer a crossover product on AM5. That could be a Zen 6 core in an AM5-compatible package (with AM5-compatible chiplets for I/O, etc.), with the caveat that some newer features might not work. Or it could be a Zen 5+, where they use the newest silicon improvements to add clock speed to Zen 5 or something.

I did hear they are trying to have Zen 5 work well with DDR5-8000.
 
Will this be a "fixed" RDNA 3 🤔
I wouldn't place money on it. My understanding is those issues stem from the TSMC packaging options and the latency they introduce, not from the silicon itself. I don't think they can fix that with RDNA 3.x without spending a lot more on a different packaging configuration, which would price it out of the market.
 
Maybe not pushing very hard in gaming GPU space, but starting to push in the AI Enterprise space where the deep pockets are at!
I'm not so sure about that either. A box containing 8 MI300X units is price-comparable to an Nvidia H100 box in a similar configuration; the catch is that the H100 is 12-14x faster in current LLMs, so you will get upwards of 10x the output in the same amount of time at the same power draw.
And while the MI300X is only starting to ship, Nvidia is already prepping its replacement, the H200, which is nearly 2x faster than the H100, and it will be shipping the B100 before the end of the year, which could be another 2x the performance of the H200.

AMD can talk a big AI game, sure, but I have yet to see so much as an inkling of them delivering anything but PowerPoint promises.

In its presentations AMD keeps comparing itself to Nvidia in FP16 workloads. If you are doing things at FP16, then yes, AMD is faster based on how it has configured the platform. But most LLMs are running FP8, at which point the memory configuration is a non-factor, the on-board cache takes over, and there Nvidia has an overwhelming lead.
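To make the FP16-vs-FP8 point concrete, here's a back-of-the-envelope sketch (my own illustrative numbers, not anyone's benchmark) of why precision decides whether memory capacity is the bottleneck on these parts:

```python
# Rough estimate of LLM weight memory at different precisions.
# Illustrative only: real deployments also need KV-cache and
# activation memory on top of the weights.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Weight footprint in GB (1 GB = 1e9 bytes) at a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# A hypothetical 70B-parameter model:
params = 70e9
for prec in ("fp16", "fp8"):
    print(f"{prec}: {weight_memory_gb(params, prec):.0f} GB")
# fp16 needs 140 GB (spills past a single 80 GB card), fp8 needs 70 GB.
```

Halving the bytes per weight is why a part can look memory-starved at FP16 yet comfortable at FP8, and why the two precisions rank the hardware differently.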
 
I'm not so sure about that either. A box containing 8 MI300X units is price-comparable to an Nvidia H100 box in a similar configuration; the catch is that the H100 is 12-14x faster in current LLMs, so you will get upwards of 10x the output in the same amount of time at the same power draw.
And while the MI300X is only starting to ship, Nvidia is already prepping its replacement, the H200, which is nearly 2x faster than the H100, and it will be shipping the B100 before the end of the year, which could be another 2x the performance of the H200.

AMD can talk a big AI game, sure, but I have yet to see so much as an inkling of them delivering anything but PowerPoint promises.

In its presentations AMD keeps comparing itself to Nvidia in FP16 workloads. If you are doing things at FP16, then yes, AMD is faster based on how it has configured the platform. But most LLMs are running FP8, at which point the memory configuration is a non-factor, the on-board cache takes over, and there Nvidia has an overwhelming lead.
AMD has some big-name companies lined up to grab an alternative to the locked-down ecosystem Nvidia is pushing with CUDA. Even if AMD is the AI underdog supplying cheaper AI solutions, it can still gain a sizeable market share by being more affordable. Not all companies have the cash to pony up for the uber-expensive Nvidia hardware. It sounds like you need to get up to speed with the latest information.

https://www.techspot.com/news/101238-amd-mi300x-ai-accelerator-faster-than-nvidia-h100.html

https://community.amd.com/t5/instin...ms-and-industry-leading-inference/ba-p/652304
 
AMD has some big-name companies lined up to grab an alternative to the locked-down ecosystem Nvidia is pushing with CUDA. Even if AMD is the AI underdog supplying cheaper AI solutions, it can still gain a sizeable market share by being more affordable. Not all companies have the cash to pony up for the uber-expensive Nvidia hardware. It sounds like you need to get up to speed with the latest information.

https://www.techspot.com/news/101238-amd-mi300x-ai-accelerator-faster-than-nvidia-h100.html

https://community.amd.com/t5/instin...ms-and-industry-leading-inference/ba-p/652304
I'm sure AMD is working on things; there is too much money at stake for them not to be, and lots of companies don't like being locked in to CUDA. But it's not like you have to run CUDA code on Nvidia: you can run all the open frameworks just as well on Nvidia hardware as on AMD hardware. Nvidia makes it a losing battle, though, because the CUDA libraries are just so clean, well documented, and efficient. Nvidia makes it so easy to use CUDA that you have to seriously ask whether there is a cost-benefit to not using it.
The problem here is that AMD's MI300X solution is not that much cheaper; we are talking 10-15% cheaper, if that, because demand for the AMD hardware is currently somewhat higher than normal, since it isn't banned in China (yet).
A quick note, though, on those slides AMD is sharing around: notice they don't compare output at Nvidia's FP8 to their FP16. They show latency, but not output, because while AMD does technically have the lower latency, it is also doing 1/14th the output.

The H100 platform hits some serious memory limitations when doing work at FP16 (not architectural ones, memory ones), which is why one of the big changes in the H200 platform is doubling the memory. That brings it back to being faster than the MI300X platform at FP16, and it is still significantly faster at FP8.

AMD sort of rigged their benchmark by comparing a single batch to a single batch. Fair enough: take one workload and compare it to an identical workload on the competitor's hardware. That is apples to apples, and in that case AMD clearly shows itself to be faster. But nobody uses this hardware that way in the wild: you combine queries and submit them as a batch, which increases latency but lets you process 10 or more jobs at once. So sure, the person typing in their question may have to wait a full extra second for the response, but you have processed 10 questions at once instead of one.
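The batching trade-off is easy to sketch with made-up numbers (these timings are purely illustrative, not measured on either vendor's hardware):

```python
# Toy model of single-query vs batched LLM serving.
# Assumed (illustrative) timings: a batch takes a bit longer than a
# single query, but amortizes that time across every query in it.

def serve(batch_size: int, base_latency_s: float = 1.0,
          per_extra_query_s: float = 0.1) -> tuple[float, float]:
    """Return (latency of the batch in seconds, throughput in queries/sec)."""
    latency = base_latency_s + per_extra_query_s * (batch_size - 1)
    throughput = batch_size / latency
    return latency, throughput

lat1, tp1 = serve(1)     # single query: 1.0 s latency, 1.0 q/s
lat10, tp10 = serve(10)  # batch of 10: ~1.9 s latency, ~5.3 q/s

# Each user waits almost an extra second, but the box answers ~5x more
# questions per second, which is how this hardware is run in practice.
```

That is why single-batch benchmarks and real deployments can rank the same two machines differently.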

I'm not saying the AMD solution is garbage, but they are very, very far behind, and they need all the big-name partners they can get, because AMD's benchmarks do not live up to real-world usage at the moment. Their current solution is great if you are researching something new, but it's not like Nvidia isn't doing the same.

And remember, the H100 is the old part; the MI300X is up against the H200 and the B100, so depending on how you want to measure things, AMD is either one or two generations behind.
 
Cool stuff coming from AMD. Keep pushing Intel and Nvidia!! Competition means lower prices and that makes everyone happy
Except AMD isn't really a threat to NVIDIA. AMD reacts to NVIDIA more than the other way around.
They're pushing Intel for sure. Nvidia not so much.
Exactly. Something close to performance parity is really required to push the competition.
 
Glad I went ahead with my 7800X3D then. Will wait and see how those turn out.

I'm seriously considering getting a 7800X3D as well to match with my new 4080 Super...I would have to build a whole new system but it might be worth it
 
I'm seriously considering getting a 7800X3D as well to match with my new 4080 Super...I would have to build a whole new system but it might be worth it
Wait. While my 7800X3D build is fantastic, the X670E mobos leave something to be desired. I'd wait until we have more info on mobos built with the X7xx chipsets before building a whole new rig.
 
Wait. While my 7800X3D build is fantastic, the X670E mobos leave something to be desired. I'd wait until we have more info on mobos built with the X7xx chipsets before building a whole new rig.
Nothing about the AMD boards this gen has inspired me. More than happy to watch this one from the benches.
 
Wait. While my 7800X3D build is fantastic, the X670E mobos leave something to be desired. I'd wait until we have more info on mobos built with the X7xx chipsets before building a whole new rig.

700 chipsets?...is that coming before Zen 5?...I always prefer to jump into AMD platforms once the chipset/platform is mature...AMD is more prone to having issues at the start...weren't there some issues with overheating or voltage shortly after the 600 chipsets/Zen 4 were released?...Zen 5 might have some issues at launch

at least now with B650E and X670E all the issues have been ironed out
 
700 chipsets?...is that coming before Zen 5?...I always prefer to jump into AMD platforms once the chipset/platform is mature...AMD is more prone to having issues at the start...weren't there some issues with overheating or voltage shortly after the 600 chipsets/Zen 4 were released?...Zen 5 might have some issues at launch

at least now with B650E and X670E all the issues have been ironed out
I would encourage you to read some threads about the mobos. The PCIe layouts are pretty weak, and they still suffer from stupidly slow boot-up times. You have a pretty capable computer currently, so waiting 3-4 months (I'm guessing mid-to-late spring is when we should hear about the next round of mobos with the new chipset) really isn't a big deal. I upgraded in September because my 6700K was really showing its age; otherwise I would have waited to see what the X7xx chipset mobos offered.
 
I would encourage you to read some threads about the mobos. The PCIe layouts are pretty weak, and they still suffer from stupidly slow boot-up times. You have a pretty capable computer currently, so waiting 3-4 months (I'm guessing mid-to-late spring is when we should hear about the next round of mobos with the new chipset) really isn't a big deal. I upgraded in September because my 6700K was really showing its age; otherwise I would have waited to see what the X7xx chipset mobos offered.

I heard about the slow boot times, I thought it was a BIOS thing that was fixed...I also read that enabling 'Memory Context Restore' in the BIOS significantly speeds up boot times on AM5 boards...are slow boot times part of AM5 or is it motherboard specific?
 
I heard about the slow boot times, I thought it was a BIOS thing that was fixed...I also read that enabling 'Memory Context Restore' in the BIOS significantly speeds up boot times on AM5 boards...are slow boot times part of AM5 or motherboard-specific?
It’s a grab bag.
 
I heard about the slow boot times, I thought it was a BIOS thing that was fixed...I also read that enabling 'Memory Context Restore' in the BIOS significantly speeds up boot times on AM5 boards...are slow boot times part of AM5 or motherboard-specific?

It’s a grab bag.
Overall, Intel since 12th gen and AMD since Zen 4 have slower boot times* than people were previously used to. That said, my ASRock AM5 board is totally fine, even without Memory Context Restore turned on. It's only about an 8-second delay to POST.

*As Lakados said, it is a bit of a grab bag. Some boards have noticeably faster boot times than others. Even boards from within the same brand.

I think that overall it may be something to do with DDR5, because in summer 2022 I did a build for some kids with an Intel B660 motherboard and 32 GB (4 sticks of 8 GB) of DDR4, and it booted very quickly. Much faster than the three DDR5 boards I have used for 12th and 13th gen Intel (one was a Z690, the others B660 and B760). I even re-used a 12th gen CPU in that DDR4 build, one I had previously used in the Z690 board.
 
Overall, Intel since 12th gen and AMD since Zen 4 have slower boot times* than people were previously used to.

*As Lakados said, it is a bit of a grab bag. Some boards have noticeably faster boot times than others. Even boards from within the same brand.

I think that overall it may be something to do with DDR5, because in summer 2022 I did a build for some kids with an Intel B660 motherboard and 32 GB (4 sticks of 8 GB) of DDR4, and it booted very quickly. Much faster than the three DDR5 boards I have used for 12th and 13th gen Intel (one was a Z690, the others B660 and B760). I even re-used a 12th gen CPU in that DDR4 build, one I had previously used in the Z690 board.

I leave my PC on most of the time, so faster boot times would be nice but not a dealbreaker...as long as it only affects the boot and not anything within Windows after boot
 
I leave my PC on most of the time, so faster boot times would be nice but not a dealbreaker...as long as it only affects the boot and not anything within Windows after boot
All the new CPUs are fantastic when in actual use. I bought a 12700K at launch and it impressed me constantly. The 13600K was similarly great.

AM5 was a mess at launch, so I abandoned it. But I returned in October, when I was able to price-match a 7800X3D at Best Buy for $320. It's been fantastic in the ASRock ITX board I have it in.
 
I just wish AMD would increase core counts. Intel is really taking them to task in productivity.

Or at least slide their name branding up one peg. The world is ready for an 8-core Ryzen 5 and a 12-core Ryzen 7.
 
I just wish AMD would increase core counts. Intel is really taking them to task in productivity.

Or at least slide their name branding up one peg. The world is ready for an 8-core Ryzen 5 and a 12-core Ryzen 7.
What does Intel have that outdoes a 7950X?
 
What does Intel have that outdoes a 7950X?
At 100% load on a single task, nothing.

But if you're an office user who has a web browser open with too many tabs, a dozen PDFs, a couple of Excel sheets, and a few Word documents, with Teams running and a web meeting happening, then Intel takes the lead. The Intel rig will feel snappier and more responsive.
If you are using single heavy apps then AMD all day.
But the average user isn’t pinning much from either team with 8 cores or more.
 
At 100% load on a single task, nothing.

But if you're an office user who has a web browser open with too many tabs, a dozen PDFs, a couple of Excel sheets, and a few Word documents, with Teams running and a web meeting happening, then Intel takes the lead. The Intel rig will feel snappier and more responsive.
If you are using single heavy apps then AMD all day.
But the average user isn’t pinning much from either team with 8 cores or more.

Put an NVMe drive in and I guarantee neither one will feel any different. Intel and AMD run office software pretty much the same, and those types of machines are often handicapped by the memory installed.
 
I'm sure AMD is working on things; there is too much money at stake for them not to be, and lots of companies don't like being locked in to CUDA. But it's not like you have to run CUDA code on Nvidia: you can run all the open frameworks just as well on Nvidia hardware as on AMD hardware. Nvidia makes it a losing battle, though, because the CUDA libraries are just so clean, well documented, and efficient. Nvidia makes it so easy to use CUDA that you have to seriously ask whether there is a cost-benefit to not using it.
The problem here is that AMD's MI300X solution is not that much cheaper; we are talking 10-15% cheaper, if that, because demand for the AMD hardware is currently somewhat higher than normal, since it isn't banned in China (yet).
A quick note, though, on those slides AMD is sharing around: notice they don't compare output at Nvidia's FP8 to their FP16. They show latency, but not output, because while AMD does technically have the lower latency, it is also doing 1/14th the output.

The H100 platform hits some serious memory limitations when doing work at FP16 (not architectural ones, memory ones), which is why one of the big changes in the H200 platform is doubling the memory. That brings it back to being faster than the MI300X platform at FP16, and it is still significantly faster at FP8.

AMD sort of rigged their benchmark by comparing a single batch to a single batch. Fair enough: take one workload and compare it to an identical workload on the competitor's hardware. That is apples to apples, and in that case AMD clearly shows itself to be faster. But nobody uses this hardware that way in the wild: you combine queries and submit them as a batch, which increases latency but lets you process 10 or more jobs at once. So sure, the person typing in their question may have to wait a full extra second for the response, but you have processed 10 questions at once instead of one.

I'm not saying the AMD solution is garbage, but they are very, very far behind, and they need all the big-name partners they can get, because AMD's benchmarks do not live up to real-world usage at the moment. Their current solution is great if you are researching something new, but it's not like Nvidia isn't doing the same.

And remember, the H100 is the old part; the MI300X is up against the H200 and the B100, so depending on how you want to measure things, AMD is either one or two generations behind.
Your 10-15% cheaper is waaaaaaaaaaaaaaaaaaaaaaaay off

https://www.tomshardware.com/tech-i...ce-nvidias-h100-has-peaked-beyond-dollar40000
 
I just wish AMD would increase core counts. Intel is really taking them to task in productivity.

Or at least slide their name branding up one peg. The world is ready for an 8-core Ryzen 5 and a 12-core Ryzen 7.
AMD smokes Intel on the server side, especially in core counts, but the average user just does not need that many cores. It might look good in synthetic scores, but I downgraded from a 5900X to a 5800X3D because I just did not benefit from the higher core count compared to the faster single-threaded speed of the 5800X3D. At current fab costs, I doubt we will be seeing an 8-core Ryzen 5.
 
Put an NVMe drive in and I guarantee neither one will feel any different. Intel and AMD run office software pretty much the same, and those types of machines are often handicapped by the memory installed.
They all run NVMe, and just about every user notices the Intel ones feel better. I don't have enough 7000-series or 14th-gen parts to say whether it's still an issue there, but between 12th gen and the 5000 series, Intel comes out ahead as far as accounting and secretarial work is concerned.
 
I'm most interested in running distributed computing projects on my host while also using it for other things. If I'm not gaming, my GPU is running Folding@home, and I'm almost always running 28 threads of something like Universe@Home. I think a 7950X is the best solution for my use case, but since I haven't been following Intel's 14th gen super closely, I was willing to be surprised.
 
Not sure where Tom's pulled their supposed H100 pricing from, because it looks like it was eBay. Are they seriously comparing OEM pricing for AMD to eBay scalpers for Nvidia?
Because the H100 from an OEM, I can reasonably say, is between $20,000 and $25,000 USD if you can wait the 6-month lead time (which is half what it was). But you shouldn't; you shouldn't buy them at all, because the H200 will be out and available before you get your H100, and by the time you have it installed and running you would be looking at the B100 numbers and asking yourself why you paid for the H100 at all, as it's 1/4 the speed of the new parts.

Still, the MI300 coming in at $15,000 on the high end is better than I expected. We'll see how long it stays there once it enters general availability and Chinese companies get a chance to purchase them, as they could be viable for anyone there who couldn't legally get an H100.
 
Not sure where Tom's pulled their supposed H100 pricing from, because it looks like it was eBay. Are they seriously comparing OEM pricing for AMD to eBay scalpers for Nvidia?
Because the H100 from an OEM, I can reasonably say, is between $20,000 and $25,000 USD if you can wait the 6-month lead time (which is half what it was). But you shouldn't; you shouldn't buy them at all, because the H200 will be out and available before you get your H100, and by the time you have it installed and running you would be looking at the B100 numbers and asking yourself why you paid for the H100 at all, as it's 1/4 the speed of the new parts.

Still, the MI300 coming in at $15,000 on the high end is better than I expected. We'll see how long it stays there once it enters general availability and Chinese companies get a chance to purchase them, as they could be viable for anyone there who couldn't legally get an H100.
https://www.tomshardware.com/news/nvidia-to-sell-550000-h100-compute-gpus-in-2023-report

"While we don't know the precise mix of GPUs sold, each Nvidia H100 80GB HBM2E compute GPU add-in-card (14,592 CUDA cores, 26 FP64 TFLOPS, 1,513 FP16 TFLOPS) retails for around $30,000 in the U.S. However, this is not the company's highest-performing Hopper architecture-based part. In fact, this is the cheapest one, at least for now. Meanwhile in China, one such card can cost as much as $70,000."

CDW was reportedly selling them for $30,000 (discontinued as of Nov 2023)

Microsoft is getting a discount so their cost is $10,000 for each MI300X.
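Taking the prices floated in this thread at face value (all second-hand figures, so napkin math only), the price-per-throughput question looks like this; the 4x speed ratio below is a placeholder assumption, not a measurement:

```python
# Napkin math on price vs claimed throughput, using numbers quoted in
# this thread (all unverified: ~$30k H100 retail, $10k-15k MI300X) and
# an assumed relative-speed ratio for illustration.

def cost_per_unit_throughput(price_usd: float, rel_throughput: float) -> float:
    """Dollars per unit of relative throughput; lower is better value."""
    return price_usd / rel_throughput

# If the H100 really were, say, 4x the FP8 output of an MI300X:
h100 = cost_per_unit_throughput(30_000, 4.0)    # $7,500 per unit of work
mi300x = cost_per_unit_throughput(15_000, 1.0)  # $15,000 per unit of work

# Even at half the sticker price, the slower part can cost more per
# unit of work, which is the crux of the disagreement in this thread.
```

Swap in whatever speed ratio you believe; the sticker-price gap only wins if the real throughput gap is smaller than the price gap.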
 