AMD Zen Rumours Point to Earlier Than Expected Release

So I said Carrizo is HSA compliant, as opposed to Kaveri being HSA ready, and that it was hardware that made the difference. Now look at your statement. Go ahead, look at it carefully... It seems I did post proof showing the design difference that made Carrizo compliant rather than ready like its predecessors. Hence I was in fact COMPLETELY correct, which must mean you were in fact wrong. And before you say you didn't say that, you insinuated it quite clearly in the post of yours I linked in this post.


We were talking about two different things: I was talking about a complete system, you weren't, and Carrizo won't work as intended since it doesn't have dual-channel memory. WTF does the interconnect have to do with it?

I will answer that: it has to be scalable and coherent with regard to a complete system architecture for HSA, because it will not work with a discrete GPU, even AMD's FireGLs, though that's no fault of its own; the FireGL boards just aren't capable of this yet... As I stated, you don't know what that means, you're just throwing words out there. Reading shit does not mean you are correct. Carrizo's interconnect is not capable of both those things!

http://wccftech.com/amd-carrizo-apu-architecture-hot-chips/

Although the “Carrizo” and the “Carrizo-L” accelerated processing units carry essentially the same code-names, the chips are poles apart and are powered by very different technologies. The “Carrizo” APUs integrate up to four high-performance “Excavator” x86 cores, Radeon R7 graphics engine based on the GCN 1.2 architecture, a dual-channel DDR3/DDR4 memory controller as well as full HSA [heterogeneous system architecture] 1.0 implementation. By contrast, the “Carrizo-L” APUs feature up to four low-power Puma+ x86 cores, Radeon R-series graphics engine based on the GCN 1.0 architecture as well as a single-channel DDR3 memory controller.
https://en.wikipedia.org/wiki/Puma_%28microarchitecture%29

Want to read this?

What kind of memory controller does it have?
 

Look one last time:

YOU ARE TALKING ABOUT HOW IT IS DONE NOW.

I AM TALKING ABOUT HOW IT COULD BE DONE OR WILL BE DONE.

I mentioned Carrizo as a point going forward, the first HSA-compliant part, and that hardware was indeed part of the equation, not solely software as was being implied.

I posted plenty of articles speaking to the HSA-implementing parts of Carrizo. What Carrizo is doing in a particular laptop neither concerns me nor changes my point. I am looking at the destination, not so much the starting point.

I don't get why you can't understand this. I will ask again: is English your primary language? It seems you just don't understand what I am saying, like we need a translator. I am being serious, not condescending. It would go a long way toward explaining why you always skew my points into meaning something else.

By the way, I read that already as well. I told you I have read nearly everything on HSA and AMD over the past year and a half to two years.

http://www.extremetech.com/mobile/207229-207229

That article covers most of what I have said.

Hold on, that isn't the article. Somehow the one I used in the quotes isn't linked... back in a sec.
 

I did not skew your points; your points are missing a huge reason why HSA hasn't taken off yet, and that is my point. It's been 4 years since HSA's conception. It will happen in the near future, to what degree TBD, but it won't be right now, because of what I have stated. Is that plain enough for you? Telling someone they're wrong when they are talking about the technical merits behind why it hasn't happened, which by the by are well above your head, and then trying to redirect the conversation while looking down and talking shit doesn't get you far. If you want to talk shit, you'd better be ready to back your crap up, because you are going to get a face full of it when it comes back.

OEMs have little to do with the memory controller, dude, so don't even bother looking for a link that says they do. The reason Kaveri did is because they used a separate chip... if I'm not mistaken.
 

I have backed up my claims and explained them in great detail.

This circular crap has to stop. I mention that hardware is a component of HSA. You (collectively, as in everyone who questioned me) ask me to prove it. I say Carrizo is HSA compliant rather than ready like previous models, and am then asked to prove it. So I link numerous articles (albeit one didn't get linked), thereby proving what I said, and then you skew the argument from that premise to what actually exists, or to some memory configuration I was never trying to debate, single channel versus unified and so on.

I never argued HSA's existence in the now, nor have I debated it. AMD's market share is proof enough of why it hasn't happened yet in the consumer PC space, excluding consoles.

This whole downhill debate started with the mention of HSA and then the omission of unified memory, which is what HSA REQUIRES. Then came nitpicking over details just to obfuscate the argument. Unnecessary!
 


I didn't ask you to prove Carrizo was HSA compliant, nor do I think anyone did. I think what was asked was: what does the interconnect have to do with HSA compliance? In other words, what are the specifications for an interconnect to be HSA compliant? And as I stated, you will not answer that, nor did you even broach the question.
 

I was trying to find the slide pic (not AMD's, I don't believe) that shows memory access from both the CPU and GPU and what was necessary to be HSA compliant. Alas, I can't find it right now. But it showed the progression from standard to ready to compliant: standard has separate CPU and GPU pools; ready has separate CPU and GPU pools but shares a virtual address space (I think I remember that correctly, probably not); and lastly, compliant has both CPU and GPU sharing the same physical memory pool. That was the difference in actual physical form.
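To put those two extremes in code terms, here is a minimal sketch. The gpu_alloc/gpu_copy functions are invented stand-ins for illustration only, not any real runtime API; a real runtime (OpenCL, the HSA runtime, CUDA) has its own allocators and copy calls.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Invented stand-ins for a GPU runtime -- NOT a real API.
static void* gpu_alloc(size_t n) { return std::malloc(n); }
static void gpu_copy(void* dst, const void* src, size_t n) { std::memcpy(dst, src, n); }

int main() {
    const size_t n = 1 << 20;
    float* host = static_cast<float*>(std::calloc(n, sizeof(float)));

    // "Standard": separate CPU and GPU pools. Every input crosses the bus
    // before the GPU can touch it, and results cross back afterwards.
    float* dev = static_cast<float*>(gpu_alloc(n * sizeof(float)));
    gpu_copy(dev, host, n * sizeof(float)); // explicit transfer -- the time-waster

    // "Compliant": one physical pool with shared addressing. The GPU
    // dereferences the very pointer the CPU filled, so nothing is copied.
    float* shared = host; // zero-copy: hand over the pointer, not the data

    std::printf("copy path buffer: %p, zero-copy pointer: %p\n",
                static_cast<void*>(dev), static_cast<void*>(shared));
    std::free(dev);
    std::free(host);
    return 0;
}
```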
 

Unified memory is certainly the kingpin of HSA. It removes one of the biggest time-wasters in GPGPU computing. Now, for the HPC folks, that would necessarily require implementing it in a discrete GPU, which would in turn require a CPU on the GPU to address the same memory, so that's still a bit further away. But for consumer software it would also eliminate one of the bigger penalties that currently exist when trying to parallelize tasks on the GPU.

If AMD has finally been able to produce a hardware and software solution, then yay, that's great progress.
 

Now, I know it is just dreaming, but this would be quite the boon to gaming just by increasing minimum frame rates, like Mantle did (I mean in the way of increasing minimums, not really touching maximum frame rates). Those BF4 Mantle graphs were the smoothest frametime graphs I have ever seen.
 
4-Hi HBM is 2× 128-bit channels per die, so 8 channels and 1024 bits per stack. If they're using unstacked DDR4, that's going to be quite a few sticks and one hell of a memory pool.
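Working that out as a quick sketch (the 1 Gb/s pin rate below is an assumption based on first-generation HBM; later generations run faster):

```cpp
#include <cstdio>

int main() {
    const int channels_per_die = 2;   // 2x 128-bit channels per DRAM die
    const int dies = 4;               // 4-Hi stack
    const int bits_per_channel = 128;
    const double gbps_per_pin = 1.0;  // assumed HBM1 pin rate of 1 Gb/s

    int channels = channels_per_die * dies;        // 8 channels
    int width = channels * bits_per_channel;       // 1024-bit interface
    double gb_per_s = width * gbps_per_pin / 8.0;  // 128 GB/s per stack

    std::printf("%d channels, %d-bit interface, %.0f GB/s per stack\n",
                channels, width, gb_per_s);
    return 0;
}
```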
 

As a general rule, you don't want to do a lot of memory transfers across the PCI-E bus, since the bus itself becomes a major bottleneck. That's why discrete GPUs have a ton of VRAM that gets loaded in ahead of time. This is a significant limiting factor in GPGPU applications, which are harder to pre-load into VRAM ahead of time.

Even integrated GPUs have memory access issues; that's why Intel plops high-speed eDRAM on its chips to act as a high-speed memory buffer.

So yeah, unified memory access has its own issues that need addressing, and claiming it solves every problem under the sun is silly.
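For a rough sense of scale, a quick calculation. The PCIe figure is the theoretical ceiling of a 3.0 x16 link after 128b/130b encoding; the VRAM figure assumes a typical 256-bit GDDR5 card at 7 Gb/s, so treat both as ballpark numbers:

```cpp
#include <cstdio>

int main() {
    // 16 lanes * 8 GT/s * 128/130 encoding, in bytes: ~15.75 GB/s
    const double pcie3_x16 = 16 * 8.0 * (128.0 / 130.0) / 8.0;
    // Assumed card: 256-bit bus at 7 Gb/s GDDR5 = 224 GB/s
    const double vram = 256 / 8.0 * 7.0;
    const double buf_gb = 1.0;  // move a 1 GB working set

    std::printf("1 GB over PCIe: %.1f ms, from VRAM: %.1f ms (%.0fx slower)\n",
                buf_gb / pcie3_x16 * 1000, buf_gb / vram * 1000,
                vram / pcie3_x16);
    return 0;
}
```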
 

And claiming it has no merit is even sillier. No one is claiming it fixes everything, only that it opens some doors, and at the least it is a different way to do it.
 


Look at the amount of bandwidth VRAM to a GPU has, and then look at the bandwidth the PCI-E bus has at maximum. Then think about how much of the bandwidth from VRAM to GPU and vice versa is used in a modern application. Guess what: it's almost all used.

If at all possible, right now you do not want to transfer anything over the PCI-E bus unless it's something happening in the background for use at a future time, and even that will drop GPU performance, because the GPU has to handle the transfer; we see this happening right now too. You can't have it both ways: either you use the performance for the task at hand, or you split it between that and background transfer operations (the copy queue). Ideally, right now, you don't want to be doing too much transferring. But because some cards (older gens vs. newer gens, and within the same gen too) have less VRAM at the same performance level, background transfers come in handy.

Yeah, and iGPUs do have their issues; this is what I was talking about, with the different bit widths of different processors vs. the bit width of the RAM. Latency is a big issue here as well.
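To make the frame-budget cost of background transfers concrete, a back-of-the-envelope sketch (the 64 MB per-frame streaming figure is purely hypothetical, and the bus rate is the same assumed PCIe 3.0 x16 ceiling as above):

```cpp
#include <cstdio>

int main() {
    const double pcie_gbs = 15.75;            // assumed PCIe 3.0 x16 effective rate
    const double frame_ms = 1000.0 / 60.0;    // 16.7 ms budget at a 60 fps target
    const double stream_mb = 64.0;            // hypothetical per-frame asset streaming

    // Time the bus spends on the background copy, per frame.
    double xfer_ms = stream_mb / 1024.0 / pcie_gbs * 1000.0;
    std::printf("streaming %.0f MB costs %.1f ms, %.0f%% of a %.1f ms frame\n",
                stream_mb, xfer_ms, xfer_ms / frame_ms * 100.0, frame_ms);
    return 0;
}
```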
 

The entire point of having true unified memory is to remove the PCI-e bottleneck. Of course, for a discrete GPU that means having a CPU on the same die to handle serialized tasks, so you don't have to go back to the system CPU. I know Nvidia was R&D'ing this concept a while ago, but I think they gave up.
 

Wasn't that with an ARM chip onboard?
 
Yeah, it was pretty much an SoC, a Tegra-type chip; it might actually have been part of the Tegra line, I don't know though.
 

And you won't have discrete standalone GPUs go away, because of power/thermal constraints. No matter what you do, your memory transfers are limited by whatever external bus you connect the GPU to, so HSA benefits are inherently limited to APUs. And even then, you are going to remain hopelessly limited by DRAM's inherently low bandwidth.

Unless AMD is willing to put high-bandwidth memory directly on the CPU die (which is VERY expensive), the benefits of HSA for APUs are minimal, limited by memory bandwidth. For discrete GPUs, there is zero benefit, since a copy to VRAM remains the best solution due to PCI-E memory access limitations.

In short, HSA is just like Mantle: a solution that benefits AMD's current product lineup at the expense of everyone else. It's marketing to make its products look better than they are; simple as that.
 

OK, so either you are playing with words or you are constraining your point to quite a narrow view. HSA benefits being very minimal? Maybe you didn't see the test of the 7850K against the 3960K/X (don't recall exactly which, Intel has too many numbers), where without HSA the Intel was well above the 7850K, as one would expect, while with HSA the 7850K was far ahead. Seems the benefits are more than minimal/trivial. That also explains why so many are interested in seeing that tech come forward, and why some others do all they can to discredit it.

And by the way, HSA is an open-source product, if you will. The HSA Foundation proves the benefits are far more than just for AMD.
 
And you won't have discrete standalone GPUs go away, because of power/thermal constraints. No matter what you do, your memory transfers are limited by whatever external bus you connect the GPU to, so HSA benefits are inherently limited to APUs. And even then, you are going to remain hopelessly limited by DRAM's inherently low bandwidth.
Not entirely away, but I wouldn't be surprised if discrete GPUs started getting packaged like CPUs and used sockets.

Unless AMD is willing to put high-bandwidth memory directly on the CPU die (which is VERY expensive), the benefits of HSA for APUs are minimal, limited by memory bandwidth. For discrete GPUs, there is zero benefit, since a copy to VRAM remains the best solution due to PCI-E memory access limitations.
That CERN presentation said 8 channels on a dual-CPU chip. That would leave Zen a likely 16-core, 4-channel chip, not including a potential GPU or 2nd CPU affixed. AMD already indicated they were working on a performance-oriented chip, so expensive seems likely. I'd think a 16-core CPU + APU on an interposer, with HBM for the GPU and DDR4 off-chip for the CPU, would fit that bill nicely. Bonus points if the CPU can easily access the HBM.

In short, HSA is just like Mantle: a solution that benefits AMD's current product lineup at the expense of everyone else. It's marketing to make its products look better than they are; simple as that.
There are more companies than just AMD working on HSA. I'm not sure how that is at the expense of everyone else. Currently it's all the mobile IHVs working together on what they think is the future.
 
OK, so either you are playing with words or you are constraining your point to quite a narrow view. HSA benefits being very minimal? Maybe you didn't see the test of the 7850K against the 3960K/X (don't recall exactly which, Intel has too many numbers), where without HSA the Intel was well above the 7850K, as one would expect, while with HSA the 7850K was far ahead. Seems the benefits are more than minimal/trivial. That also explains why so many are interested in seeing that tech come forward, and why some others do all they can to discredit it.

AMD took a massively parallel operation, offloaded it to a processor that specializes in performing massively parallel operations, and saw a speedup. Essentially, they did exactly what OpenCL/CUDA already have the ability to do; offloading tasks from the CPU is nothing new (a minimal OpenCL sketch follows at the end of this post).

And by the way, HSA is an open-source product, if you will. The HSA Foundation proves the benefits are far more than just for AMD.

So is OpenCL. The difference: OpenCL code would also benefit Intel CPUs or NVIDIA GPUs.

Not entirely away, but I wouldn't be surprised if discrete GPUs started getting packaged like CPUs and used sockets.

Doubt it, again, because of cost.

That CERN presentation said 8 channels on a dual-CPU chip. That would leave Zen a likely 16-core, 4-channel chip, not including a potential GPU or 2nd CPU affixed.

We already know of 8 channels on the 32-core Opteron. This raises the question of where the memory controller sits on the package. If we assume quad-core blocks, this implies a single channel available to a basic quad-core module. Very interested to see the block diagram of Zen with this in mind.
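As referenced above, here is roughly what that kind of offload looks like with the standard OpenCL C host API: a minimal vector add, error handling and cleanup stripped for brevity. The explicit buffer copies are exactly the overhead a true unified-memory system would remove.

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// The massively parallel operation being offloaded, as OpenCL C source.
static const char* src =
    "__kernel void vadd(__global const float* a, __global const float* b,"
    "                   __global float* c) {"
    "    size_t i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main() {
    const size_t n = 1024, bytes = n * sizeof(float);
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err); // OpenCL 1.x style

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, &err);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);

    // Without unified memory, inputs are copied into device-side buffers...
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               bytes, a.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               bytes, b.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, &err);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);

    // ...and the result is copied back afterwards. These two copy steps are
    // the time-wasters that shared physical memory would eliminate.
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c.data(), 0, nullptr, nullptr);
    std::printf("c[0] = %f\n", c[0]);
    return 0;
}
```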
 
Doubt it, again, because of cost.
DDR4 and WideIO are designed to sit on top of the processor. HBM as we've seen sits in stacks around the processor on an interposer. Not currently cheap, but that seems to be the way things are going.

We already know of 8 channels on the 32-core Opteron. This raises the question of where the memory controller sits on the package. If we assume quad-core blocks, this implies a single channel available to a basic quad-core module. Very interested to see the block diagram of Zen with this in mind.
For clarification, that 32-core Opteron is 2 CPUs sharing a socket, according to that CERN presentation. I'd bet there are 2 memory controllers (one for each processor) connected through some sort of interconnect, that interconnect being the 100 GB/s low-latency, coherent thing AMD was talking about a month or so ago. That would make Zen models varying speeds/configurations of CPU+CPU or CPU+GPU. Dual GPU would be... interesting. No idea where all those HBM stacks would go. That same interconnect would also have to replace PCI-E.
 
DDR4 and WideIO are designed to sit on top of the processor. HBM as we've seen sits in stacks around the processor on an interposer. Not currently cheap, but that seems to be the way things are going.

First problem: Cost. Even DRAM is expensive, HBM even more so. APUs are designed, first and foremost, as budget parts to replace standard CPU/GPU configurations, so driving up baseline costs makes them uncompetitive. And AMD can't afford to sell products at or near a loss right now.

Second problem: Die space. HBM has a large footprint, which limits how much of it you could physically plop on a CPU die. It's the same reason why budget chips don't typically have an L3 cache; you can't justify the performance benefit/cost relationship.

Third problem: Power draw. While drawing significantly less power than GDDR did, HBM still draws a lot of power from a CPU perspective. This reduces your thermal margin, which forces other design sacrifices.

So yeah, AMD may release a chip with on-die HBM, but it won't be cost-competitive with other solutions.

For clarification, that 32-core Opteron is 2 CPUs sharing a socket, according to that CERN presentation. I'd bet there are 2 memory controllers (one for each processor) connected through some sort of interconnect, that interconnect being the 100 GB/s low-latency, coherent thing AMD was talking about a month or so ago. That would make Zen models varying speeds/configurations of CPU+CPU or CPU+GPU. Dual GPU would be... interesting. No idea where all those HBM stacks would go. That same interconnect would also have to replace PCI-E.

My same point still applies. If that's the case, you still get 8 total channels split across two CPUs with a combined 32 cores. That implies two 16-core units, each with 4 channels, which points to a 4-core unit being the baseline module for Zen.

This would imply, given a quad-core Zen would only have one memory channel, that AMD doesn't plan to release anything less than an octo-core CPU, since a single memory channel would be a massive system bottleneck and adding a second is probably cost-prohibitive.

The fact AMD feels the need to go so wide is VERY worrying to me, since this seems like the BD strategy taken to the extreme. Software isn't going to scale, so why does AMD feel the need to make its new CPUs so wide? (The rough arithmetic is sketched below.)
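A quick sketch of the arithmetic behind that worry. DDR4-2400 is an assumed speed grade, and the quad-core module is the speculation above, not anything AMD has confirmed:

```cpp
#include <cstdio>

int main() {
    const int cores = 32, channels = 8;  // the rumoured Opteron configuration
    const int module_cores = 4;          // assumed quad-core building block

    int modules = cores / module_cores;                 // 8 modules
    double ch_per_module = double(channels) / modules;  // 1 channel each

    // Single-channel DDR4-2400: 64-bit bus * 2400 MT/s = 19.2 GB/s,
    // shared by four cores if the quad-core block really is the unit.
    double gbs = 64 / 8.0 * 2.4;
    std::printf("%.1f channel(s) and %.1f GB/s per %d-core module\n",
                ch_per_module, gbs * ch_per_module, module_cores);
    return 0;
}
```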
 
You don't just add a memory controller per cluster; you add one per die. In this case AMD probably added a quad-channel controller on their 16-core parts, and more likely a dual-channel one for 4/8-core chips. There could be some high-end 8-core chips with quad channel too.

A socketed GPU would be fun and ultimately useless. Too many variables to make one work, mainly space and power usage.
 
First problem: Cost. Even DRAM is expensive, HBM even more so. APUs are designed, first and foremost, as budget parts to replace standard CPU/GPU configurations, so driving up baseline costs makes them uncompetitive. And AMD can't afford to sell products at or near a loss right now.
The DDR4 wouldn't be on the package. Maybe in the future, but probably not for a few years. At that point, what does a mobo look like with the CPU and all the RAM condensed into a single socket? AMD also said it's going to be a premium part, so I'm expecting some expensive features. What does Intel charge for a high-end CPU? $1000? That's significantly more than AMD's Fury lineup with HBM. APUs are normally budget parts, but do they have to be? If you could achieve mid-range or even high-end performance, why not?

Second problem: Die space. HBM has a large footprint, which limits how much of it you could physically plop on a CPU die. It's the same reason why budget chips don't typically have an L3 cache; you can't justify the performance benefit/cost relationship.
If you were designing a performance part, it seems reasonable. HBM has a footprint, but the performance may be warranted, even more so if it provides high-bandwidth memory for the actual CPU side through that interconnect. 100 GB/s of memory bandwidth for a CPU is nothing to scoff at; current quad-channel DDR4 is around half of that interconnect speed (rough numbers at the end of this post), and it may even reduce latency depending on the usage scenario.

Third problem: Power draw. While drawing significantly less power than GDDR did, HBM still draws a lot of power from a CPU perspective. This reduces your thermal margin, which forces other design sacrifices.
Maybe, but it does likely fit within a CPU socket's power budget. AMD's CEO has also stated they're pushing for low-power, borderline mobile designs. So while power and thermals may be an issue, that seems to be something they're attempting to work around.

A socketed GPU would be fun and ultimately useless. Too many variables to make one work, mainly space and power usage.

Not useless if you were designing a platform with few to no expansion slots. Socketed GPUs would also be coplanar with your CPU, which could do a lot for cooling. Throw in HSA and shared memory, and expansion slots may not work as well.
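Rough numbers for the bandwidth comparison above. DDR4-2133 is an assumed speed grade; depending on the grade, quad channel lands somewhere between roughly half and two-thirds of the quoted 100 GB/s figure:

```cpp
#include <cstdio>

int main() {
    const double interconnect_gbs = 100.0;  // the coherent fabric AMD described
    const double mts = 2133.0;              // assumed DDR4 speed grade
    const int channels = 4;

    // channels * 8 bytes wide * transfer rate: ~68 GB/s
    double ddr4_gbs = channels * (64 / 8.0) * (mts / 1000.0);
    std::printf("quad-channel DDR4-%.0f: %.1f GB/s vs %.0f GB/s interconnect\n",
                mts, ddr4_gbs, interconnect_gbs);
    return 0;
}
```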
 
And you won't have discrete stand alone GPUs go away because of power/thermal constraints. No matter what you do, your memory transfers are limited by whatever external bus you connect the GPU to, so HSA benefits are inherently limited to APUs. And even then, you are going to remain hopelessly limited by DRAMs inherently low bandwidth.

I agree that the primary benefit of HSA is a true unified memory (where zero copy is possible), so you're looking at some kind of combined GPU / CPU chip, whether that's an APU or an IGP or an ARM SoC or whatever Project Denver thing nvidia was working on. And that concept, by itself, is not really HSA specific. It's just the AMD / industry marketing term for it.

The DRAM bandwidth isn't a consideration in a unified memory solution. The bandwidth constraints are the reason for wanting unified memory, so that you don't have to copy back and forth between the GPU and CPU.
 
Probably worth adding that if they are going for unified memory, some of that memory almost has to be on the chip. HBM, for example, is 3000+ connections according to the JEDEC specification. Doubt WideIO is much better, and that's still a year or more away. That'd be on top of the DDR4, the wide interconnect, and other connections. The socket would be laughably huge to accommodate that. Not to mention all the traces on the motherboard.
 
Not useless if you were designing a platform with few to no expansion slots. Socketed GPUs would also be coplanar with your CPU, which could do a lot for cooling. Throw in HSA and shared memory, and expansion slots may not work as well.

It would increase the power draw through the motherboard by a huge margin, and it would also increase the component count by quite a bit when we are already short on space for components. You could take an EATX motherboard, put a GPU socket on it (which would take years to standardize between both Nvidia and AMD), remove most of the slots, and watch it not sell well at all.

For GPGPU needs you could try this, but no one's going to buy off on it, seeing as if one of the GPUs fries, they would rather replace the card than take the chance it would take the entire system with it.
 
you add one per die. In this case AMD probably added a quad-channel controller on their 16-core parts, and more likely a dual-channel one for 4/8-core chips. There could be some high-end 8-core chips with quad channel too.

A socketed GPU would be fun and ultimately useless. Too many variables to make one work, mainly space and power usage.

AMD has done this in the past with Opteron, with two chips totaling 16 cores, so I think it's definitely possible.
 
Probably worth adding that if they are going for unified memory, some of that memory almost has to be on the chip. HBM, for example, is 3000+ connections according to the JEDEC specification. Doubt WideIO is much better, and that's still a year or more away. That'd be on top of the DDR4, the wide interconnect, and other connections. The socket would be laughably huge to accommodate that. Not to mention all the traces on the motherboard.
If HBM is there, then it's probably on-package and we'll see a Fury-ish sized chip.
 
AMD has combined a socketed GPU with an Opteron in the past?

No, but it is better to look at it like putting 7850Ks together; they are APUs, just like Zen. They are likely using 16-core APUs together, much like the Opterons.
 
Well, I think it will be for some users, not all; people who really need 32 physical cores are few and far between.

Even for my rendering work, I won't spend more than ~$1200 per CPU; going beyond that, the cost-to-performance ratio skyrockets...

And if 16 cores per CPU is Zen's max and they perform well enough to compete with Intel's high-end Xeons, I think we will see them priced at $2k. With something like a x2 on an MCM, those will probably cost much more than having 2 separate CPUs; then again, I haven't looked at prices of the dual Opterons on the same MCM, so I might be quite wrong too, lol.

But in all likelihood, I would like to see 8-core Zens compete with 8-core Intels with a price tag of $400...

The CPU market has been stagnant for too long now.
 
If they can just release a quad core that is ahead of the current offerings from Intel, I'll jump back to AMD. My personal needs are in single-threaded performance, but I would like to have 6-8 cores; if it doesn't sacrifice OC too much while delivering my target level of performance (roughly 30% more IPC than my current 3960X running as a quad core at 4.7), then I'll buy it anyway, even if it costs me 500 more than the quad (say 1000 vs. 500), just to have the luxury.

From what I can tell, 40% on their current gen puts me close to Sandy performance, which is not enough. I had high hopes for the Fury X, and ended up buying a 980 Ti and a newfound hatred for Su and that other c**t who said it was an overclocking dream (that's right, I'm calling him a C, as per Aussie tradition). So while I find myself reading every news article on Zen, this time I have no hope at all, but it'll make my day if they pull off something drastic.

Someone correct me if I'm wrong about it matching Sandy at those numbers; I haven't used an AMD CPU since I had a Phenom 6-core, or X6, which I knew nothing about anyway, as I was but a sparkle in the overclocking mother's eye at the time, lol.
 
If they can just release a quad core that is ahead of the current offerings from Intel, I'll jump back to AMD.
Someone correct me if I'm wrong about it matching Sandy at those numbers; I haven't used an AMD CPU since I had a Phenom 6-core, or X6, which I knew nothing about anyway, as I was but a sparkle in the overclocking mother's eye at the time, lol.

Unrealistic expectations either way. Too many people speculating; not too sure where this whole "it has to beat Intel" parade comes from. The R&D budget for AMD's CPUs is low compared to Intel's.

If you take that metric as 10:1 and project it onto grocery shopping: by having 10 times the money, you can buy better-tasting food, let's say for $100, yet you expect AMD to deliver the same quality and taste of food for only $10. You can say this comparison is not the best, or even close to what is happening in the tech world today, but it does put into perspective the nonsense about beating Intel that you and others proclaim here every time a post like yours comes along...
 

I don't care if it costs the same as the flagship Intels, as long as it performs the same. What I'm getting at is that if they don't release a CPU that fits what I need, then I won't be jumping on board with Zen. But I'm hoping they do. I too keep seeing proclamations of destroying Intel and that whole argument; what I want to know is where I'm getting that 30% IPC over my current processor, and I couldn't care less about brand wars. In fact, if AMD releases a CPU that's, say, on par with the hypothetical 7700K Kaby Lake (which I would hope offers me this increase over my 3960X), and it costs the same to build with or maybe even a few dollars more, I would buy the AMD just because I would prefer to help the little man.

So it's not about beating Intel; it's about coming to the table with a CPU that fits my needs, namely better than SB-E, but it would seem that is a long shot. My referring back to the Fury X is expressing my legitimate disappointment in a product I was told was supposed to be the best, when it clearly wasn't. Edit: that may not be relevant here; the point is, once I've been lied to, I find it hard to regain trust. They haven't lied to me yet, but I expect they will. Well, at least the mainstream market will hopefully see some competition again.
 