AMD Further Unveils Zen Processor Details

A compiler could one day be smart enough to notice patterns we're unable to see, or simply brute-force its way toward the best possible binary for a given combination of source code and platform hardware.

They already are. Back in my demoscene days in the '90s, when the kids had to squeeze the absolute most they could out of an architecture in order to win the competition, they'd often hyper-optimize their code in assembler and just use higher-level languages (mostly C or Pascal) to loop that assembler code and provide basic logic.

These days very few people are doing this, partly because for most software there is no point in optimizing anymore. Your MP3 player or word processor is not going to come anywhere near challenging a modern CPU, so it doesn't matter if you can squeeze another percent or two out of it. For most tasks CPU cycles are cheap, because they're just sitting there doing nothing if you don't use them.

For areas in which you actually need to get the most out of your hardware (computational work: encoding, rendering, code breaking, etc., and to a lesser extent games), we do still see limited hand-optimized assembler in some cases, but on modern systems a good compiler can usually generate faster code from C than a skilled programmer can write by hand in machine code.
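
To make that concrete, here's a minimal sketch (my own illustration, not anything from the thread): a plain C loop that modern GCC or Clang will typically auto-vectorize at -O2/-O3, emitting SSE/AVX on its own.

```c
#include <stddef.h>

/* A trivial reduction in plain C. Compiled with optimization
 * (e.g. gcc -O3), modern GCC/Clang will typically unroll this and
 * emit SSE/AVX vector additions on x86 -- the kind of transformation
 * that used to require hand-written assembler. */
long sum(const int *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}
```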

The part I find particularly amazing is that Java, .NET, Scala and other VM-type languages can in some situations outperform statically compiled code, because the runtime executes the code itself and can take the overall state of the system into account when it decides how to implement the code as it runs (so-called runtime optimization).

On average VM-type code still performs significantly worse than compiled C-type languages, but there are many cases where the opposite is true, and the VM-type languages are gaining on compiled languages every year.
 
I have a question. Compiler-wise, is it even possible to have machine code instructions that point at specific cores, or, say, the "second" available register of a given type within a multi-core CPU?
 
That I am not quite as read up on. Hopefully someone else knowledgeable in here can chime in.
 
You *could*, but that's typically a bad idea.

Remember: you are not running on an embedded system where you know exactly what is running at a specific point in time; you are running a general-purpose high-level OS where you, the developer, have no say whatsoever in when your application actually gets to run. You cannot assume that any specific CPU resource is free at any specific point in time; that's for the OS and CPU hardware to take care of. Trying to "optimize" at the instruction level within the software will very quickly lead to performance stalls within the CPU, as your application (or some other application, including the host OS) gets stuck waiting for whatever resource you specifically want your code to use to free up.

Now, on an embedded system running an RTOS, where you have full control of code execution, you could *maybe* get away with this if you spend a few years understanding exactly how every possible combination of instructions gets processed on your specific hardware, but the gains (<0.1%) aren't worth the effort.
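
For what it's worth, the closest thing to "pointing code at a specific core" on a general-purpose OS is an affinity request, which is a plea to the scheduler rather than a machine instruction. A minimal Linux-specific sketch (assuming glibc; the function name is mine):

```c
#define _GNU_SOURCE
#include <sched.h>

/* Ask the Linux scheduler to keep the calling thread on core 0.
 * This is an OS-level request, not a machine instruction -- x86 has
 * no "run this on core 2" encoding -- and, per the post above,
 * pinning like this is usually counterproductive on a desktop OS. */
int pin_to_core0(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                                /* core 0 */
    return sched_setaffinity(0, sizeof(mask), &mask); /* 0 = this thread */
}
```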

----------------------------------------

Look, I'm tired of the "software developers aren't trying" BS, and yes, as a software developer, I can safely call it BS. Outside of very specific use cases, the overwhelming majority of code cannot be broken up into threads in any way that benefits the application.

Simple rule of thumb: do two threads need to touch the same object in memory? If yes, you probably shouldn't have two threads. Any time two threads can touch the same object, you introduce software locks, which kill performance for both threads by driving up latency. That's why only specific parts of a program get threaded; taking games as an example, you may have eight or nine separate parts of the engine running in just as many threads, which will themselves have several sub-threads each. The problem is that despite the fact you have 50-60 threads, only two or three of them do major (>1% CPU load) work. And then people who have a very limited understanding of how software actually works complain that "the program isn't threaded". No, it's threaded fine; it's just that you have one or two threads that simply CAN'T be broken up, and oh, those threads also happen to account for 60-70% of your total processing time, because those are the main execution and primary render threads.
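
To illustrate that rule of thumb, here's a deliberately bad sketch (assuming POSIX threads): two threads incrementing one shared counter through a mutex. The lock serializes them, so the second thread adds contention and latency rather than throughput.

```c
#include <pthread.h>

/* Two threads touching the same object: every increment has to take
 * the mutex, so the threads spend their time waiting on each other
 * instead of working -- often slower than one thread doing it all. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 10000000; i++) {
        pthread_mutex_lock(&lock);   /* the serialization point */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```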

For the majority of your use cases, software will not scale the way you want it to scale. It's that simple. The best CPU design going forward for general-purpose processing, in my opinion, looks a lot like an Intel i7: a four-core CPU with 2-way SMT support. You don't need more than that for a general-purpose PC.
 
I asked that question (renaming registers) because I had out-of-order execution in mind at the time.
Seeing that this AMD gen will have so many cores, it just sounds like a bummer if they won't get to be used.
I was wondering if it's possible and/or viable to somehow 'hint' the CPU/OS scheduler to use, say, the first available idling core, because you know you'll be needing additional time for an incoming 'difficult' instruction. Or you want to have a core begin preparing for a specific data segment.
Maybe they already do this, I don't know.
 
Seeing that this AMD gen will have so many cores, it just sounds like a bummer if they won't get to be used.

That is one reason I expect most users will get the 4C / 8T Zen processors instead of the 8C / 16T.


I was wondering if it's possible and/or viable to somehow 'hint' the CPU/OS scheduler to use, say, the first available idling core, because you know you'll be needing additional time for an incoming 'difficult' instruction.

This is similar to what over-provisioning for HT does: you make a wider execution path than most workloads need and have extra ALU/AGU units that can be used by the other thread. However, these execution units are not shared across cores.
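
As for the "have a core begin preparing for a specific data segment" part of the question: the closest user-visible knob is a software prefetch hint, which the CPU is free to ignore. A sketch assuming GCC/Clang builtins:

```c
/* Hint the CPU to start pulling *p toward the cache before we need it.
 * It's advisory only, and it warms the cache of the core the calling
 * thread is already on -- there's no way to ask some other idle core
 * to prefetch on your behalf. */
static inline void warm_up(const void *p)
{
    __builtin_prefetch(p, 0, 3); /* 0 = for read, 3 = high temporal locality */
}
```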
 
The reason is simple. They are both launching after support has ended for Windows 7 and 8.

Partially true.

Mainstream support has ended for Windows 7 (but will not have ended for Windows 8.1 by then; that doesn't end until January 2018).

Even so, in the past, the end of mainstream support just meant that Microsoft stopped adding new features to an OS. It has never before meant that individual hardware manufacturers can't or won't release new drivers for it.

Microsoft has twisted AMD and Intel's arms on this one, to try to force the holdouts into Windows 10.

Linux support also requires you use a relatively new version.

Well, sort of. For the CPU portions to work you'll just need a kernel with added support. You can install a current kernel on any Linux distribution, regardless of how old, and it should work just fine.

If you want integrated GPU support, it becomes trickier, but not impossible. You'll want the latest Mesa and X11 packages, which can usually be obtained through add-on repositories, even for older releases.

As an example, Ubuntu 14.04 LTS, released in April 2014, will have support until April 2019. Heck, even Ubuntu 12.04 will still be supported until April of next year.

It will of course be MUCH easier to just use the latest release of your favorite distribution right after launch.
 
Microsoft has twisted AMD and Intel's arms on this one, to try to force the holdouts into Windows 10.

What about enterprise customers? I mean, two years ago the hospital system I work at (tens of thousands of PCs) finally switched to Windows 7. The next OS switch will not happen for several years.
 
To touch on the Linux note: Linux is a far more modular beast than Windows is, and it's been designed that way since birth. First off, you can straight up build your own OS, so if you don't like how a window manager changes from one version to another, you can take that old code base and keep it going. Second, say you have Ubuntu 4.10 and you want to keep that going (for some reason): you can compile the kernel yourself and manually install it even if support from Canonical is long gone.

I get the Windows 10 thing though. After the crazy Windows XP holdout issues, I'm sure Microsoft just wants to maintain a single OS.
 
What about enterprise customers? I mean, two years ago the hospital system I work at (tens of thousands of PCs) finally switched to Windows 7. The next OS switch will not happen for several years.


The article suggests there will be no exceptions for Enterprise. If you want Zen or Kaby Lake and Windows, it's Windows 10 or bust.

They postulate we might see massive Skylake buys from companies that are not ready to transition their OS yet.
 
In the old days, assembly optimization could make multiple orders of magnitude of difference in performance.
Early processors didn't have complex instructions, or when they did, they were implemented in terms of other, simpler built-in instructions.

The 6502, for example, does not have a multiply instruction, so if you wanted to compute x * y you had to add x to a running total, y times. Multiplying by 10,000 would require something in the realm of 30,000 cycles, if the add sequence had a modest (for the time) 3-cycle latency. And remember, you only have about a million cycles per second (the exact rate varies between world regions based on the TV standard) on your Commodore 64. And a whole lot of those you can't use, because the VIC-II video chip turns the processor off every 8th scanline.
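
In C terms, the difference looks roughly like this (a sketch of the idea, not actual 6502 code):

```c
#include <stdint.h>

/* What a multiply-less CPU forces on you: y additions, which is the
 * ~30,000-cycle case described above for y = 10,000. */
uint16_t mul_naive(uint16_t x, uint16_t y)
{
    uint16_t r = 0;
    while (y--)
        r += x;
    return r;
}

/* The shift-and-add routine people actually wrote: one add per set
 * bit of y, so at most 16 iterations instead of up to 65,535. */
uint16_t mul_shift_add(uint16_t x, uint16_t y)
{
    uint16_t r = 0;
    while (y) {
        if (y & 1)
            r += x;
        x <<= 1;
        y >>= 1;
    }
    return r;
}
```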

Early Motorola FPUs actually had to be sent their instructions by the CPU, which did this using the interrupt line, freezing the CPU for each and every FPU instruction; a 30 MHz chip in this setup might do 300k FLOPS. Moving the FPU on-die with the CPU really, really helped performance.

For a long time this didn't get greatly better: performing a sine on the FPU of a 486 processor can take hundreds of cycles, or you can use a lookup table and spend 10 or so cycles, granted it's not as accurate.
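
The lookup-table trick looks something like this (a minimal sketch; the 256-entry table and simple truncation are illustrative choices, and accuracy suffers accordingly):

```c
#include <math.h>

#define TAB_SIZE 256  /* 256 steps per full circle */

static float sin_tab[TAB_SIZE];

/* Build the table once at startup, paying for libm's sin() up front. */
void init_sin_tab(void)
{
    for (int i = 0; i < TAB_SIZE; i++)
        sin_tab[i] = (float)sin(2.0 * M_PI * i / TAB_SIZE);
}

/* A handful of cycles instead of hundreds on x87, at the cost of
 * accuracy. Assumes a non-negative angle in radians. */
float fast_sin(float radians)
{
    int i = (int)(radians * (TAB_SIZE / (2.0 * M_PI)));
    return sin_tab[i & (TAB_SIZE - 1)];
}
```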

Hell, anything x87 is a pain in the ass; SSE and AMD64 are godsends. With SSE (and friends), assembler coding isn't that bad for targeted hot spots in program code, even to this very day.

Compilers are so much better now, and the CPU itself really picks up a lot of the slack where compilers are still deficient. Modern CPUs can do multiplies in a cycle, so even if the compiler misses the shift-operator optimization when you multiply an integer by a power of 2, who cares (as one random example from many possible).

In a way I miss the old stuff a bit. The early platforms were always defined by perfect timing: you had x cycles between each scanline, x cycles for each screen dot. There were many neat quirks and oddities in early systems. The Commodore 64 had memory clocked at double speed, and the VIC-II accessed the memory on the low side of the clock while the CPU accessed it on the high side, meaning the CPU did not have to be halted during the entire scanline (just every 8th scanline, as the VIC-II filled in the color RAM for the new character line), as was necessary on most other platforms of the day.
 
Well, poop. I was hoping NOT to go to Win 10, and I was hoping this CPU would finally be the competition we needed. We'll see how things play out. I'll still hold out.
Just switch to Linux ;)
 
I hate to be that guy, but either AMD can't make functional test samples or it's time to throw the hype train off the cliff.
To the point: in the Geekbench database there are some results for an AMD "Diesel" motherboard, with a pair of something that looks like the rumored Snowy Owl 16C/32T engineering samples. What is the problem?
Results for it are baaaaaad:
AMD Corporation Diesel - Geekbench Browser

Now, the scaling between single-core and multi-core is weird, but if we look at the ES name, we can guess that it's because single-core was running at 2.9 GHz while multi-core was running at 1.4 GHz. That, or Geekbench does not know how to handle multi-socket; you take your pick. Well, let me introduce you to the lowest-end dual-socket Xeon SKU on the market, the 2603 (in particular v3, but v4 is just slightly faster).
Supermicro SYS-6028R-T - Geekbench Browser

You guys make the comparison. I'll just claim that the memory controller in my 4-year-old Bobcat laptop is better.
 
And your point? Are you saying we have Zen here? Or are you assuming that those samples are running at any speeds other than what it says there? Too much speculation and nothing concrete. If you can give me 100% concrete proof we've got Zen running at 2.8, I am all ears, but that just sounds like digging up dirt for the sake of digging up dirt.
 
The irony here is that the dirt was dug for me in the SA thread on Zen, where someone else stumbled upon it.
Next up: AMD marks the frequencies a chip runs at in the name itself. As such, concluding that it is a 1.4 GHz base clock with a 2.9 GHz turbo is entirely reasonable, just like you could see 32/28 marked on the Summit Ridge ES in the AotS leak.

So, unless it is someone faking it up for lulz, it is quite certainly either 2x Snowy Owl or 1x Naples ES running.
 
It shouldn't really be a surprise. The interconnect and clustered cores are just awful scaling-wise if data depends on another thread.

But you also have to assume a higher boost clock for the single-threaded bench: 1.5 GHz with all cores, 2.3-2.5 GHz perhaps with a single one? But Zen is looking like another Bulldozer/Barcelona moment.

The Xeons in question, in case anyone wonders, are 1.6 GHz Haswell Xeons without turbo.

Looking at single-threaded, and giving Zen the full benefit of the doubt with a 1.44 GHz clock, the normalized scores become 1268 for Zen vs 1804 for Haswell in ST.
 
*Sigh* Certain folks in here bashing anything positive about AMD and proudly displaying anything unprovable about AMD's newest Zen architecture. LOL. You would think they have a vested interest in seeing AMD fail. :eek::D:rolleyes:
 
You would only think that if you think us stupid enough to believe our words could somehow make AMD's chips worse than they are.

Anyways, this ES leak is anything but a positive, except that apparently, in the best-case scenario of it being a single core at 2.9 GHz, the IPC aligns with what AMD claims Zen has.

But then we have the issue of dual Snowy Owl (or Naples?) having horrendous multi-core scaling.
 
From The Stilt.

The SKU for this CPU has not been leaked before (AFAIK) and it is definitely legit.
The score itself obviously doesn't represent the performance with a default configuration.

The SKU is supposed to have a 1450 MHz base clock and a 2.9 GHz MSCB.

Turbo seems to be disabled in this one for now.
 
If turbo is disabled then the results make even less sense, tbh, because these multi-core results are close to perfect scaling on my system.


So yeah, I'll agree that these results should not be representative of the real thing. But god damn, what the hell anyways.
 
They can post as much as they want, but until there are retail samples in play it should not matter much. The whole Blender demo proved this.
Some will "bash" regardless; as soon as it does not beat Intel outright, it will suck anyway.

We're all waiting for the final clock speed rating on the retail product, and then we might get a bit further.
 
Core clusters and MCM will lower performance when work depends on a main thread, due to latency and bandwidth. If it was run with a tile-based renderer, for example, it would scale much better. I also don't know how many threads GB4 can actually scale to.

But we got the single-thread numbers. They show the same case as the AotS leak.
 
On a second look, it is actually probably Amdahl's law kicking in, assuming the sample indeed had turbo and SMT disabled.
Single-thread IPC is a fair bit low, but a severe upgrade over Piledriver nonetheless.
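
For reference, Amdahl's law: if a fraction p of the work can be spread across n cores, the best possible speedup is

```latex
S(n) = \frac{1}{(1 - p) + p/n}
```

So even with p = 0.9, 32 threads top out at 1 / (0.1 + 0.9/32), roughly 7.8x; the serial fraction dominates long before you run out of cores.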
 
I think people are assuming way too much off this bench, with everything wrong in the frickin' benchmark description. Seriously, do you all expect it to be worse than the current gen? I highly doubt it. Obviously, whatever it is, if it is a Zen sample it is running severely crippled, according to what everyone is saying in that forum. It may be AMD's way of finding the person, if someone is leaking numbers.

Then again, some people here will speculate anything and everything to prove an "I told you so" argument when the product comes out, hoping they were right. But this benchmark shows nothing concrete, and yet some are already guaranteeing single-threaded performance.

From everything seen in the high-level architecture overview, and even reading the AnandTech articles, significant IPC improvements are expected. So I highly doubt it sucks so bad. I'll wait for real-world tests to be done on the CPU before I call it a fail.

Will AMD disappoint? I am sure they will find a way. If clock speed is the only thing holding back Zen, that will be disappointing, but that can be fixed over time. I don't expect them to beat Intel, but I don't expect this to be another Bulldozer; the Zen architecture says otherwise.
 
The kicker is that those who were hoping to take advantage of the multi-core/thread aspect of Zen with DX12 will have to use W10. So there's that.
 
Man, why do people hate Windows 10 so bad? I would trade it for no other OS. Yeah, Microsoft wants to push it down your throat, but after using it I couldn't go back. My work computer has Windows 7, and I stand corrected: after 6 months the computer already feels like it runs at half the speed it did on day one. The funny thing is I can't even install any programs on it; all I have on it is Excel files.

I really don't have much bad to say about the OS itself. Do I like Microsoft being pushy about it? No, but the OS itself just works, is stable and fast, and doesn't seem to slow down over time.
 
+1

It seems to run fine, and it takes a while to turn a bunch of the "features" off, but if you want DX12, it's the only game in town.
 
My plan is to run Windows 10 in a VM, using VFIO to forward a proper PCIe video card to it for near-metal desktop/gaming performance. Hopefully on my new Zen chip early next year, and if that isn't comparable to the Intel chips, then on a new socket-2066 Skylake-X/Kaby Lake-X chip a little later that year.

Putting Windows 10 in a nicely managed cage where it belongs, and being able to rapidly switch between OSes when I can do things without Windows. No kernel-mode nonsense/telemetry gonna sneak through my Linux firewall, and I can automatically run hourly snapshots, useful for rolling back from bad updates or any other nonsense that tends to happen on the Windows platform. I set up a test of this on an A10-5800K box with a GF 620 as the VFIO device, and it worked pretty well. It needs extra memory for the host OS, and the IO takes a bit of a hit (10-20% or so depending on config). Though RAM is so cheap these days I wouldn't be shocked to find it in a Happy Meal or a box of Trix!
 
I was never able to get it to work with Nvidia hardware. It is outright blocked in ESXi, and when I switched to KVM, there are supposedly hacks to make it work, but it is hit or miss whether they actually do. With AMD GPUs, however, it is supposedly easy and "just works".

Modern computers with fast SSDs boot really quickly though, so it doesn't bother me to reboot into Windows when I want to play a game. I'd rather wait the 20 seconds it takes than put up with even the single-digit percentage loss in performance a passed-through GPU gets you, but that's probably because I run games at 4K and hate SLI, so I only use a single GPU. I need every little ounce of performance I can get.

If I had spare performance I might try it again.
 
Well, I'm a confirmed Linux fan, so it would take an act of God to get me to switch back to Windows as my primary OS again.

That being said, Windows 10 runs great when I dual-boot into it for games. It irks me that it has non-removable Microsoft apps on it. It's like the bad old days before I had a Nexus phone and had to put up with pre-installed bloatware I never wanted. If I could remove all the tablet apps, Xbox integration, the TV & Movies app, the music app (forget what it's called), the Microsoft Store, Edge, etc., and completely disable any MS data collection and any cloud integration, I'd stop complaining right away.

They really did a good job making a smooth-running OS; I just hate all the strong-arm BS to try to get you to use their services and collect your data.
 
I won't argue that it can be a pain to set up. And yes, it only works with either KVM or Xen. The Nvidia driver checks the hypervisor identity, and if it is QEMU/KVM/Xen/ESXi it won't work; oddly enough, the driver only looks for those identities, so if you spoof your hypervisor ID to almost anything else, it will work.

Nvidia is just being intransigent on this one; they simply can't block hypervisor functions outright, as Windows 10 will use one by default in 64-bit mode unless you disable it. They want to tier this type of function off so they can sell it as a feature in enterprise-grade products.

Also, if you set up 1-gigabyte huge pages along with the VFIO drivers, the performance hit may actually be very minimal, sub-1% potentially. And if you forgo the snapshot features, you might be able to gain IO performance through MD RAID and bcache with an NVMe drive. Of course, that assumes you have a few spare cores to throw around, so that's 2011-v3 i7 or E5 only territory at the moment.
 
Well, I'm a confirmed Linux fan, so it would take an act of God to get me to switch back to Windows as my primary OS again.

That being said, Windows 10 runs great when I dual-boot into it for games. It irks me that it has non-removable Microsoft apps on it. It's like the bad old days before I had a Nexus phone and had to put up with pre-installed bloatware I never wanted. If I could remove all the tablet apps, Xbox integration, the TV & Movies app, the music app (forget what it's called), the Microsoft Store, Edge, etc., and completely disable any MS data collection and any cloud integration, I'd stop complaining right away.

They really did a good job making a smooth-running OS; I just hate all the strong-arm BS to try to get you to use their services and collect your data.

try this....
10AntiSpy - Windows 10™ Anti Spy
 
Strictly speaking, if it does run at 1.4 GHz, and after some verification it is entirely possible it does, it is easily up to a 90% (!!!) IPC uplift over Steamroller in some tests (though it's tricky to tell which of the tests are IPC-heavy, since I don't remember 20% of the algorithms here and don't know the other 80%).
That's simply a matter of AMD being that far behind Intel.

EDIT: Whoops, compared apples to oranges. Still, a solid 0% to 110% improvement over Vishera in IPC.
 