AMD Zen Rumours Point to Earlier Than Expected Release

I don't care if it costs the same as the flagship Intel parts, as long as it performs the same. What I'm getting at is that if they don't release a CPU that fits what I need, then I won't be jumping on board with Zen. But I'm hoping they do. I too keep seeing proclamations of destroying Intel and that whole argument; what I want to know is where I'm getting that 30% IPC over my current processor, and I couldn't care less about brand wars. In fact, if AMD release a CPU that's, say, on par with the hypothetical 7700K Kaby Lake (which I would hope offers me this increase over my 3960X), and it costs the same to build or maybe even a few dollars more, I would buy the AMD just because I would prefer to help the little man.

So it's not about beating Intel; it's about coming to the table with a CPU that fits my needs, namely better than SB-E, but it would seem that is a long shot. My referring back to the Fury X is expressing my legitimate disappointment in a product I was told was supposed to be the best, when it clearly wasn't.

First no one with any real knowledge is expecting Kaby performance. Haswell IPC is expected, or at least somewhere near that. If it gets to Kaby then great but I have yet to see any reputable proof that it will.
 

I'm expecting Haswell level PERFORMANCE, since I'm expecting a reduction in clocks due to the process node AMD is using.
 
Haswell level would be more than I was led to believe; hopefully that comes to fruition. I have no expectation of Kaby performance, but that is roughly what I need. I hope in a few years it will be a level playing field. I'd really like to see them have a win and give the market some competition.
 
I'm an old-school AMD fanboy. I'm telling you guys right now, Haswell performance is being REALLY optimistic. There's simply no way AMD can achieve that at this point, IMO. Sandy Bridge, maybe Ivy Bridge per core if they're lucky.
 
It might depend on the workload being tested. Zen could destroy Haswell in memory-bandwidth-bound scenarios if it has HBM. With HSA, anything parallel could be the same. Regardless, even if they don't beat them in all CPU tests, having that better integrated GPU (if that's the case) might be worth it. If they take a Nano, shrink it to half the size because of the node change, and start optimizing for power, that will be one heck of an APU for a lot of tasks. The rumors we've seen so far seem to indicate they're doing something like this.
 

I doubt that there will be any Zen-based products outside of an Opteron HPC part sporting HBM in the near future.

That has to be one hell of a shrink to get Fiji down to a size where it could be integrated into an APU.
 
It wouldn't necessarily have to be a full Fiji. Cut it in half with the node change (~300 mm²), add 4 stacks of HBM, and then put a little Zen core on there as well. That doesn't leave you with 3000+ pins, because of the interposer. It's definitely a performance part, but I wouldn't consider that otherwise unreasonable. It's no different than taking a current Fiji MCM and sticking it into a socket. Even if they did 8 channels of DDR4, that's going to be quite a few pins and a big socket.
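For rough scale, the "cut it in half" arithmetic works out like this (all die sizes here are approximate or assumed, and a 28 nm to 14 nm shrink rarely achieves a full 2x area reduction in practice):

```python
# Back-of-envelope die-area sketch for the proposed Zen + shrunk-Fiji APU.
# All figures are approximations/assumptions, not confirmed specs.
fiji_28nm_mm2 = 596.0        # Fiji on 28 nm is roughly 596 mm^2
ideal_shrink = 0.5           # optimistic halving from the node change
shrunk_gpu_mm2 = fiji_28nm_mm2 * ideal_shrink   # ~298 mm^2
zen_block_mm2 = 50.0         # hypothetical area for a small Zen CPU block
apu_die_mm2 = shrunk_gpu_mm2 + zen_block_mm2

print(apu_die_mm2)  # 348.0 -> big, but plausible for a performance part
```

Even with these optimistic numbers, the combined die stays well under Fiji's current size, which is the poster's point: large, but not unprecedented.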
 
I hope they can come out with an APU that can match a 380 or 970, with CPU performance at Sandy Bridge's level. I probably wouldn't buy it for myself, but I can see my 11-year-old's first gaming PC having one. And I can see good times ahead for the discrete market.

HBM to kill the RAM market and APUs to kill the discrete GPU market. Well, "kill" is a strong word, but in the mainstream market this surely has to have Intel and Nvidia sweating, especially since Nvidia can't make x86 CPUs. I wonder if we will see Intel CPUs with integrated Nvidia chips some day.....
 
We're a ways out from seeing primary system memory integrated into mainstream CPUs. Intel isn't going to be worried about AMD.
 

But then again, AMD is pretty desperate; it wouldn't be a surprise if they had a go at it.
 
I wonder if we will see Intel CPUs with integrated Nvidia chips some day.....

No.
For the same reason you won't see a Mustang with a Corvette engine come to market.
Intel HAS their own GPUs integrated. That would be cutting off their nose to spite their face.
 
But then again, AMD is pretty desperate; it wouldn't be a surprise if they had a go at it.

The problem is that DRAM is large, power hungry, and built on a different process than the SRAM that makes up CPU cache, which makes integrating it expensive. At that point, AMD would be left selling a $600 SoC, which no one would be willing to purchase.
 
Hey, I still get 60 fps in all my games. ;) I wish I had USB 3.1, though. I still have USB 3.0, so I guess it's not so bad over here. There are new 990FX boards with USB 3.1.

How many people even have 3.1 devices here?
 
Hell, I still don't have any 3.0 devices. The only USB devices I have hooked up are a mouse, keyboard, Xbox 360 controller, and mic, and I think that is it. The one flash drive I have is 2.0, I am sure.
 
I love the last line in that video.
"If any company in the world can mess this up, it's AMD"

Yeah, and he has good reasons to put it that way, even though he lists most of the stuff that has not helped AMD before as well.
Where he goes off into speculation is the 16-core part, which in itself is interesting, as is the part where he says that AMD signed a deal with Samsung for their fabs.

How many people even have 3.1 devices here?

A Kingston HyperX Savage USB stick :) It goes like hell :)
 
I doubt you'll see a consumer 16-core part, simply due to the limitations of the AM4 platform. The highest level for general consumers is probably the 8-core with SMT.
 
They won't want to sell people on more cores; it has failed once, and they must have learned their lesson. IPC is way more important than cores, and that's the only place they want to improve.
 
Now that DX12 and Vulkan are here, I would have to say NO!
To offset this comment: consoles are using 6 or 7 cores now for their high-end games, which means that however you want to spin it, gaming should benefit from more cores now more than ever.
A 16-core desktop CPU this year? I don't think so either, but the idea that you can get there is better than limping along with IPC as your champion, which did what exactly for our desktop computing needs?

It gave people in the gaming industry a complacent attitude, settling for DX9-11 performance and not being able to move forward; even as graphics cards kept improving, the CPU was not gaining more than a couple of percent if you used a new compiler :) .

If you look at Battlefield 4 on the PS4, you cannot say that IPC is king...
 
I could definitely see high-end 8-core CPUs being nice. 16 will probably be an option, but there aren't a lot of desktop applications for that kind of performance, not to mention the additional hardware (memory and interconnects) needed to feed all those cores. The only downside I see to more cores at this time is the added compute capability of DX12/Vulkan: just how many more cores do you need if you're offloading the heavy lifting to a GPU or even an APU? At this point it's hard not to expect most games to be leveraging hardware-accelerated physics.
 

How smart are programmers, really? Only one way to find out :) .
 


Long-winded speculation, with the key parts in the latter part of the video.


Very cool video. I am really hoping Zen is something worthwhile; I've been putting off my rig upgrade waiting for it. I agree with the video that Zen doesn't have to be faster than Intel. It just has to be close, and then priced competitively, and we could get some desperately needed competition injected into the CPU market. I just can't imagine Zen will be as big a flop as some of AMD's past disasters. Surely Keller was able to get things back on track over there. I don't expect him to knock it out of the park like he did last time, but I'm pretty confident he'd be able to get them at least back up to the plate and competitive again.
 
They won't want to sell people on more cores; it has failed once, and they must have learned their lesson. IPC is way more important than cores, and that's the only place they want to improve.

Agreed. There are certain problems that are naturally suited to multithreading/SMT/SIMD, and there are a lot of problems that tend to be rather sequential or I/O dependent. For the latter, single-core horsepower is best, especially in the consumer space.
 

Sadly that single core horsepower is a snail.
 
Now that DX12 and Vulkan are here, I would have to say NO!
To offset this comment: consoles are using 6 or 7 cores now for their high-end games, which means that however you want to spin it, gaming should benefit from more cores now more than ever.
A 16-core desktop CPU this year? I don't think so either, but the idea that you can get there is better than limping along with IPC as your champion, which did what exactly for our desktop computing needs?

It gave people in the gaming industry a complacent attitude, settling for DX9-11 performance and not being able to move forward; even as graphics cards kept improving, the CPU was not gaining more than a couple of percent if you used a new compiler :) .

If you look at Battlefield 4 on the PS4, you cannot say that IPC is king...

The reason consoles have to use more cores is that their CPUs are crippled by modern comparisons. The XB1's CPU is about as powerful as the 360's was, and in the case of the PS4, the CPU is a downgrade from the PS3's CPU. There's a reason several developers (Ubisoft and CDPR) complained very early on about performance limitations. It's also the reason consoles still tend to target 900p/60, or even 900p/30, since the CPUs end up being the primary performance bottleneck.

On PCs, which typically have more powerful CPUs, this is a non-factor. GPUs are almost always the bottleneck, so you gain nothing from increasing CPU performance. There's ZERO benefit to increasing CPU power when another component is the limiting performance factor.

When is Zen coming out? This year?

According to AMD, limited availability in Q4, assuming no delays.
 
The reason consoles have to use more cores is that their CPUs are crippled by modern comparisons. The XB1's CPU is about as powerful as the 360's was, and in the case of the PS4, the CPU is a downgrade from the PS3's CPU. There's a reason several developers (Ubisoft and CDPR) complained very early on about performance limitations. It's also the reason consoles still tend to target 900p/60, or even 900p/30, since the CPUs end up being the primary performance bottleneck.
On PCs, which typically have more powerful CPUs, this is a non-factor. GPUs are almost always the bottleneck, so you gain nothing from increasing CPU performance. There's ZERO benefit to increasing CPU power when another component is the limiting performance factor.

Hence my comment about Battlefield 4 on the PS4: it shows that if you know what you are doing, then you can get performance out of "more cores". Your comment also acknowledges that developers have become complacent rather than good at their jobs.
That means that all the game developers who do not know how to use the current architecture really have no business being in this business in the first place (yes, I have offended many, many people now, and I'm not too worried about it either...). The console business used to be brutal: you needed to know all of the ins and outs of the hardware to get anywhere, back in the NES and Genesis days when that skill set was required.

It is even easier now, since you can use the same base x86 code for PC and console products.
 
Well, that sucks. So much for an "earlier than expected release".

Yeah, well, there is this thing called the rumour mill, and you fell for it ;) . Supposedly in March we will see the AM4 mainboards introduced or released. This has not been confirmed anywhere, but then the ball gets rolling that Zen will come out real soon, and from that, people who can't put it in perspective will assume it is so.
The same people are thinking that there will be a consumer 32-core chip ;) , when the whole presentation was about server-related products, which will not see 2016 but 2017.

If you look at certain aspects, then you know it takes some time: when "they" reported Zen had taped out, that means it will still take a good while for something to roll onto the consumer market. Before launch you will see substantial leaks, and that buildup happens before release, when you see people in the press not talking about it due to NDA.

So the time to worry about Zen is at the end of the year, or with bad luck next year...
 
Hence my comment about Battlefield 4 on the PS4: it shows that if you know what you are doing, then you can get performance out of "more cores". Your comment also acknowledges that developers have become complacent rather than good at their jobs.
That means that all the game developers who do not know how to use the current architecture really have no business being in this business in the first place (yes, I have offended many, many people now, and I'm not too worried about it either...). The console business used to be brutal: you needed to know all of the ins and outs of the hardware to get anywhere, back in the NES and Genesis days when that skill set was required.

It is even easier now, since you can use the same base x86 code for PC and console products.

You don't really get how software works, do you?

First and foremost: no one manually writes x86 opcodes anymore; most release builds just add the -O2 switch and call it a day. And the whole "it's x86, so it's easier to port" line is exactly that: nonsense. Everything is coded at a higher level, and with the exception of platform-specific APIs, code is interchangeable between different HW platforms.

Secondly, embedded systems are different from general-purpose PCs. On an embedded system, you have ONE HW specification you can code to, which allows very low-level optimizations. By contrast, on a PC, you have ZERO control over when your application even runs; you're at the mercy of the OS thread scheduler. You can't do the same type of low-level optimizations that you can on a console; that's the entire POINT of consoles in the first place. On a console, you can ensure thread execution down to a specific timeslice if you want to. On PCs, you can't. That's why thread scaling drops dead after the second or third thread.

Thirdly, there's a difference between "using more cores" and "getting performance out of more cores". If a single core is able to service all the threads fast enough that the GPU isn't being starved, you won't gain any performance benefit from using more cores, as the GPU is already the bottleneck. It's pretty much that simple.

Fourthly, you have Amdahl's Law to consider:

Amdahl's law - Wikipedia, the free encyclopedia

Specifically, your maximum improvement from making the code more parallel is inherently limited by the serial portion of the code. If 90% of the runtime is serial, perfectly parallelizing the rest cuts runtime by at most 10%. And most code is serial in nature; that's why the GPU subsystem, which is inherently parallel in the first place (hence, GPUs), is typically the only thing that gets its own threading. And even that is limited by existing APIs.


The problem here is people like you want PCs to be something they are not capable of being.
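The Amdahl's Law bound described above is easy to put in numbers; a quick sketch (the parallel fractions and core counts are illustrative only):

```python
# Amdahl's law: overall speedup when a fraction p of the runtime
# is parallelizable across n cores (illustrative numbers only).

def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup with parallel fraction p spread over n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# 90% serial code: even a million cores yield barely 1.11x.
print(round(amdahl_speedup(0.10, 1_000_000), 3))  # 1.111

# 90% parallel code: 8 cores give roughly 4.7x.
print(round(amdahl_speedup(0.90, 8), 3))  # 4.706
```

The first case is the point being argued: if the serial portion dominates, piling on cores does almost nothing, no matter how many you add.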
 
Well that sucks. So much for "earlier than expected release".

Ha ha. I just took the SteamVR test, and my FX-9370 and R9 290 passed with flying colors. I even outscored most of the Intel CPU / GTX 970 combos. As much as I want a new CPU, this 5-year-old build keeps on keeping on with minor upgrades yearly. Just on the video card front I have gone from GTX 460 SLI, to HD 7950, to HD 7950 Crossfire, to R9 290.

I can't complain, but damn, I want something new every so often. :)
 
You don't really get how software works, do you?
First and foremost: no one manually writes x86 opcodes anymore; most release builds just add the -O2 switch and call it a day. And the whole "it's x86, so it's easier to port" line is exactly that: nonsense. Everything is coded at a higher level, and with the exception of platform-specific APIs, code is interchangeable between different HW platforms.
Secondly, embedded systems are different from general-purpose PCs. On an embedded system, you have ONE HW specification you can code to, which allows very low-level optimizations. By contrast, on a PC, you have ZERO control over when your application even runs; you're at the mercy of the OS thread scheduler. You can't do the same type of low-level optimizations that you can on a console; that's the entire POINT of consoles in the first place. On a console, you can ensure thread execution down to a specific timeslice if you want to. On PCs, you can't. That's why thread scaling drops dead after the second or third thread.
Thirdly, there's a difference between "using more cores" and "getting performance out of more cores". If a single core is able to service all the threads fast enough that the GPU isn't being starved, you won't gain any performance benefit from using more cores, as the GPU is already the bottleneck. It's pretty much that simple.
The problem here is people like you want PCs to be something they are not capable of being.

That is why Vulkan and DX12 are here: to take it closer to what it should be. And if a single CPU core is fast enough to feed the GPU, then what is the program doing? Nothing much. When you move bottlenecks, you can't complain about things not working like they used to; that book is closed. If your game is not using the GPU enough, or does not need more cores to do the same thing, it just means your game (engine) is not demanding enough. It would suffice to do that stuff in an older API, where all the old bottlenecks are still there.

The problem is that game developers sat on their hands, pretty much, with some exceptions. (No one would have thought a decade ago that EA of all companies would play such a part in this.)
 
Now that DX12 and Vulkan are here, I would have to say NO!
To offset this comment: consoles are using 6 or 7 cores now for their high-end games, which means that however you want to spin it, gaming should benefit from more cores now more than ever.
A 16-core desktop CPU this year? I don't think so either, but the idea that you can get there is better than limping along with IPC as your champion, which did what exactly for our desktop computing needs?

It gave people in the gaming industry a complacent attitude, settling for DX9-11 performance and not being able to move forward; even as graphics cards kept improving, the CPU was not gaining more than a couple of percent if you used a new compiler :) .

If you look at Battlefield 4 on the PS4, you cannot say that IPC is king...

There is not a single DX12 benchmark where an FX beats an i3. Cores are good to have, but they are not needed; IPC is needed for sure.
 