i7-5820k vs. i7-6700K for sim gaming

Clock for clock, the worst you're going to do with X99 is equal Skylake, but in certain scenarios X99 will outperform Skylake.

Yeah, but that's the rub: what are the chances of an X99 setup clocking as high as Skylake?

Statistically speaking, 4.5-4.6 GHz is the upper limit on X99 with high-end air/water, and 4.6-4.7 GHz is the low end for Skylake with the same cooling.

You're much more likely to end up at 4.4 GHz on X99 and 4.8 GHz on Skylake.

So even if they were clock-for-clock the same, which is disputable since Skylake shows IPC improvements in the 5-10% range in many benchmarks, you're still likely going to be at a ~5% clock disadvantage.

So Skylake is likely going to be 7-15% faster single-threaded vs. X99 in the real world.
X99 will still be better when cores can come out to play, soooooo....
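
Quick back-of-the-envelope on where that 7-15% comes from (just a sketch in Python, assuming a ~5% IPC edge and the clock estimates above; the combined advantage is roughly IPC gain times clock gain):

```python
# Combined single-thread advantage ~= (IPC gain) x (clock gain).
# Assumed inputs: ~5% IPC edge for Skylake, clocks per the estimates above.
ipc = 1.05
for label, sky_ghz, x99_ghz in (("close case", 4.6, 4.5), ("typical case", 4.8, 4.4)):
    gain = ipc * (sky_ghz / x99_ghz) - 1
    print(f"{label}: {gain:.1%}")  # prints ~7.3% and ~14.5%
```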

Anyone who says there is a clear winner doesn't understand the problem or has a specific use case.
 
Why are you telling me to get out of the thread when you're not even discussing the core issue of X99 vs. Skylake for gaming? Clock for clock, the worst you're going to do with X99 is equal Skylake, but in certain scenarios X99 will outperform Skylake.

No, Skylake has faster performance per clock. Read the [H] review there, buddy.

And also see here (the only CPU-limited test Tech Report performed, the idiots):

http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/6

And here we see a whole shitload of tests that don't use more than four cores:

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/21

ALL OF THOSE GAMES ARE RECENT! They give a good picture of how many cores current game engines are utilizing.

Also, the 5820k has a lower overclock limit than the 6700k:

http://hwbot.org/hardware/processor/core_i7_5820k/

http://hwbot.org/hardware/processor/core_i7_6700k/

Those are early results, but signs are pointing to 300+ MHz higher clock speed.

For games that use 4 threads or less, you'll have 20-25% higher performance with Skylake.

For games that use 6 threads, you'll have similar performance between the two, with the 5820K taking a small lead (the 6700K will scale a little from its 4 extra Hyper-Threading threads, while the 5820K will scale 50% from two more physical cores).

Only when you exceed 6 threads will the 5820K really shine (like in Welcome to the Jungle). But that's not a common thing in games today, and probably won't be for some time.
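
If you want to see where those numbers come from, here's a toy throughput model (purely illustrative assumptions, not measurements: ~22% single-thread edge for the 6700K from clocks + IPC, a Hyper-Threaded thread worth ~0.3 of a physical core, linear scaling otherwise):

```python
# Toy model: throughput = per-core perf x (physical cores used + 0.3 x HT threads used).
# All numbers are illustrative assumptions, not measurements.
def throughput(threads, cores, per_core, ht_factor=0.3):
    physical = min(threads, cores)                # threads landing on real cores
    ht = max(0, min(threads, cores * 2) - cores)  # threads sharing a core via HT
    return per_core * (physical + ht_factor * ht)

for t in (4, 6, 8, 12):
    sky = throughput(t, cores=4, per_core=1.22)  # 6700K @ ~4.8 GHz
    hsw = throughput(t, cores=6, per_core=1.00)  # 5820K @ ~4.4 GHz
    print(f"{t:>2} threads: 6700K/5820K = {sky / hsw:.2f}")
```

That spits out roughly 1.22 at 4 threads, within a few percent of parity at 6-8, and well into the 5820K's favor by 12.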
 
Anyone who says there is a clear winner doesn't understand the problem or has a specific use case.
DING DING DING!

:D

Maybe this is why so many of us struggle with the decision. I only have 2 issues with the 6700K:
1. The TIM. They should have soldered it. You could delid and put some decent stuff under it if you are brave, or pay these guys $50 to do it for you, but it's not really worth it; it just annoys me.

2. I can't buy the damn thing!
3. I can't buy the damn thing!

you get the point
 
Correct on the first part. On the second part, you are explaining what will make the Z170 platform very confusing: there are many options, different chips to use, and different configurations. Also, I never used the word "significant" when describing the use of the PLX chip; it should be slower, but no one knows yet how slow.

There's actually quite a few reviews out already that have tested Z170 mobos with M.2 & PCI-E SSDs... Not sure what part I was correct on in the first half either, since I posed two questions that are kinda diametrically apart. ;)

Just don't see Thunderbolt catching on after so long, but I've no idea how the similarly functional USB alt modes will really be implemented at this point. If we're gonna talk confusing, X99 can be just as tough to figure out, with some CPUs providing more lanes and different mobos handling the 5820K's lane count differently...

You could make the argument that it's more future-proof because at least there's always the option of buying a (pricey) CPU that provides more lanes, but depending on the use case, Z170 may very well be more versatile since all its PCH lanes are 3.0, unlike X99's 2.0.
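
For anyone who wants the raw numbers behind that 3.0 vs. 2.0 point, here's the rough effective per-lane bandwidth after encoding overhead:

```python
# Rough effective PCIe bandwidth per lane (after 8b/10b vs. 128b/130b encoding):
PCIE2_MBS = 500   # PCIe 2.0, MB/s per lane
PCIE3_MBS = 985   # PCIe 3.0, MB/s per lane

# An x4 M.2 / PCI-E SSD hanging off the PCH:
print(f"x4 off Z170's 3.0 PCH: ~{4 * PCIE3_MBS / 1000:.1f} GB/s")  # ~3.9 GB/s
print(f"x4 off X99's 2.0 PCH:  ~{4 * PCIE2_MBS / 1000:.1f} GB/s")  # ~2.0 GB/s
```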

Shoot, the basic layout of most Z170 mobos favors my CF + sound card setup, and I can still slot in a PCI-E drive in the bottommost slot... With most X99 boards, something is getting sandwiched between two GPUs no matter what; that's often overlooked.

I think the whole thing is less black and white than many try to make it seem. Even when you start talking games vs "work", there's common work tasks that still benefit more from clock speed than 6+ cores, just like there's games at both ends of the spectrum.
 
There's actually quite a few reviews out already that have tested Z170 mobos with M.2 & PCI-E SSDs... Not sure what part I was correct on in the first half either, since I posed two questions that are kinda diametrically apart. ;)

Just don't see Thunderbolt catching on after so long, but I've no idea how the similarly functional USB alt modes will really be implemented at this point.

The main thrust of this will be laptop usage, where they'll be able to drop 2 of them on the side of the chassis and get a huge array of "docking" options off a one-cable connection.

Want 1Gb Ethernet, charging, 4-port USB 3, and a monitor connection? Cool, use this. Because that will drive the industry to the Type-C/3.1 connector, having it on the desktop seems like a good future-proofing idea. I'd assume that adding an Alpine Ridge chip will bump the mobo cost ~$20-25; that's something I'd pay for.
 
1. The TIM. They should have soldered it. You could delid and put some decent stuff under it if you are brave, or pay these guys $50 to do it for you, but it's not really worth it; it just annoys me.

2. I can't buy the damn thing!
3. I can't buy the damn thing!

you get the point

4. I can't buy the damn thing!
5-10. I can't buy the damn thing! :p

Paying someone to delid and/or pre-bin an OC for me just seems so wrong, plus it takes the fun out of it! I'm surprised they only charge $50 for that tho.
 
The main thrust of this will be laptop usage, where they'll be able to drop 2 of them on the side of the chassis and get a huge array of "docking" options off a one-cable connection.

Want 1Gb Ethernet, charging, 4-port USB 3, and a monitor connection? Cool, use this. Because that will drive the industry to the Type-C/3.1 connector, having it on the desktop seems like a good future-proofing idea. I'd assume that adding an Alpine Ridge chip will bump the mobo cost ~$20-25; that's something I'd pay for.

Right, I agree that'll be huge for laptops... But Thunderbolt aside, is there any actual info that says Alpine Ridge is able to support 3.1/Type-C modes that other 3.1 controllers can't? Because there's a lot of different things competing towards the same goal here...

Anything involving TB will require an Intel controller, but we also have DisplayPort over Type-C alt modes that have nothing whatsoever to do with TB... The ultimate laptop solution will be one that does display, USB, AND power over Type-C, but it seems OEMs aren't falling over themselves to make that happen.

I don't see how the docked usage model there really ends up mattering on my desktop either... Long run, it might be cool to be able to actually charge everything, even a laptop, off specific high-power Type-C ports, but that's gonna require much beefier power delivery, not just a special controller.
 
Another big one from Alpine Ridge is HDMI 2.0 with HDCP 2.2 out of the Type-C connector.

HDCP 2.2 is going to be required for HDR and 4K Blu-ray. Netflix won't let you output a 4K stream without HDCP 2.2.
 
None of these gaming benchmarks have yet compared 1440p or 4K.

Bet any "differences" between the 5820k and 6700k quickly vanish then.
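
The intuition, as a toy frame-time model (illustrative per-frame costs I made up for the example, assuming CPU and GPU work overlap so the slower side sets the frame rate):

```python
# Illustrative only: if CPU and GPU frame work fully overlap,
# fps is capped by whichever takes longer per frame.
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=6.0, gpu_ms=8.0))   # low res, fast CPU: GPU-bound, 125 fps
print(fps(cpu_ms=7.0, gpu_ms=8.0))   # slower CPU, same GPU: still 125 fps
print(fps(cpu_ms=7.0, gpu_ms=16.0))  # 4K: deeply GPU-bound, 62.5 fps either way
```

Crank the resolution, gpu_ms balloons, and the CPU difference disappears into the GPU's shadow.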
 
Paying someone to delid and/or pre-bin an OC for me just seems so wrong, plus it takes the fun out of it! I'm surprised they only charge $50 for that tho.

I don't have the ability to do the binning, so it's like paying more for a winning scratch off ticket. I can get behind that.

But yeah personally I like doing things like delidding myself. It's the fun part for me as well.
 
Yeah, but that's the rub: what are the chances of an X99 setup clocking as high as Skylake?

Statistically speaking, 4.5-4.6 GHz is the upper limit on X99 with high-end air/water, and 4.6-4.7 GHz is the low end for Skylake with the same cooling.

You're much more likely to end up at 4.4 GHz on X99 and 4.8 GHz on Skylake.


Where'd you get the idea that 4.6-4.7 is the low end for Skylake? I've not seen any evidence that 4.8-5.0 will be the average Skylake overclock. And 4.4-4.5 is actually the average OC range for X99.
 
Where'd you get the idea that 4.6-4.7 is the low end for Skylake? I've not seen any evidence that 4.8-5.0 will be the average Skylake overclock. And 4.4-4.5 is actually the average OC range for X99.

Browsing forums, reading reviews, checking websites.
http://hwbot.org/hardware/processor/core_i7_5820k/

I'm not citing hwbot.org as the bible, and an average has both higher and lower results. But I think 4.5 is about the max you can expect on high-end air: 4.4 is crappy, 4.6 is a great result, and anything beyond that is hitting the lottery.

OTOH, every review hits 4.7 GHz with their Skylake, and there are a handful of people who have chips doing 4.8-4.9 after delidding.
 
Browsing forums, reading reviews, checking websites.
http://hwbot.org/hardware/processor/core_i7_5820k/

I'm not citing hwbot.org as the bible, and an average has both higher and lower results. But I think 4.5 is about the max you can expect on high-end air: 4.4 is crappy, 4.6 is a great result, and anything beyond that is hitting the lottery.

OTOH, every review hits 4.7 GHz with their Skylake, and there are a handful of people who have chips doing 4.8-4.9 after delidding.

http://www.overclock.net/t/1510388/haswell-e-overclock-leaderboard-owners-club/10850

The majority of these are 4.4 and above. Real-life examples. For Skylake the sample size is much smaller. 4.7 GHz is reasonable, but you made it sound like a much larger swing than it is. You were saying the H-E high end is 4.5 vs. the Skylake low end of 4.6, when in reality it's more like 4.5 vs. 4.7 averages, with the Skylake high end being a bit unknown.
 
http://www.overclock.net/t/1510388/haswell-e-overclock-leaderboard-owners-club/10850

The majority of these are 4.4 and above. Real-life examples. For Skylake the sample size is much smaller. 4.7 GHz is reasonable, but you made it sound like a much larger swing than it is. You were saying the H-E high end is 4.5 vs. the Skylake low end of 4.6, when in reality it's more like 4.5 vs. 4.7 averages, with the Skylake high end being a bit unknown.

I don't think anyone who has delidded is doing less than 4.8 GHz. So, there is that.

Until we can actually buy the damn things it's all a bunch of guesswork. I just wish the gawd damn things would show up.
 
Just to confuse things even more, and watch you melt into a puddle.

http://i.imgur.com/g3Qacog.jpg

Why does Skylake lose to X99 in almost every game here?

Not for nothing, because I think it's irrelevant either way given the tiny differences... But you might wanna take another look: the 5820K came out on top in 4 out of 7, not exactly "almost every game".

They bolded the overall winner, not just the winner between the 6700K/5820K, which might be what threw ya off. You could say it's 3 to 5 if you count the Heaven benchmark, but then again, the difference in Batman was literally 1 fps.

Heck, the difference in almost every game, even going up against the 4790, is little more than statistical noise. Edit: the 5775C is the real winner there, overall. :p And showing minimum fps might make it more interesting.
 
Not for nothing, because I think it's irrelevant either way given the tiny differences... But you might wanna take another look: the 5820K came out on top in 4 out of 7, not exactly "almost every game".

They bolded the overall winner, not just the winner between the 6700K/5820K, which might be what threw ya off. You could say it's 3 to 5 if you count the Heaven benchmark, but then again, the difference in Batman was literally 1 fps.

Heck, the difference in almost every game, even going up against the 4790, is little more than statistical noise. Edit: the 5775C is the real winner there, overall. :p And showing minimum fps might make it more interesting.


Yeah, I don't really care so much about that benchmark, as you can easily find ones that show Skylake ahead by a bit. The point I'm trying to get across is that Skylake offers nothing over Haswell-E in performance clock for clock, and at the same time you get two fewer cores for about the same price.

The biggest thing Skylake has going for it is that its IMC is way beefier and more consistent.

Ultimately it's dumb that we even have to argue between these two platforms. 6 cores, high memory bandwidth, and 5 GHz should be standard by now. I guess Skylake-E will be the real champion.
 
"For about the same price" is relative, not everyone has a MC next door or has caught a deal for a sub $200 X99 mobo... And if you're in a place where neither is a reality the difference is like $130-240 (vs Sky i7/i5).

There's other things to weigh up besides two extra cores you may or may not use, too. It's just not that black and white. I wish hexa-core+ was the standard by now, but with no competition and no mass-market demand you can't really expect more.
 
Multi-core is just good for video editing, that's it.

Go Skylake, save electricity, and have more power in games.
 
For me, I went with the 5820K because I believe even with a mild overclock to 4 GHz, it should be fast enough for gaming. Although Skylake may be technically faster, if I'm not going to see the benefits anyway, it's kind of pointless. I don't run my games at triple-digit fps, so most of the time I'm either limited by GPU performance or simply not fully utilizing anything due to the 60 fps cap.

With that in consideration, the 5820K is a better option IMO, as DX12 will take advantage of the additional cores in the future, if we ever get to such a demanding situation within the useful lifespan of these current chips.

The PC in your signature is almost exactly what I want to build. I would like to chat about it with you sometime.
 
At 1440p and above, not true. Probably not even at 1080p with max settings.

You will see a huge improvement in emulation with Skylake, up to 30%. In the long run Skylake is the better investment, unless you are a video editor and don't care about your electric bill running 6-8 physical cores. Skylake is supposed to take advantage of DirectX 12 in the future with better optimization. Other than that, Haswell-E is a waste of money unless you do video editing for a living.
I am sure once Skylake-E comes along it will be a much better processor with lower power consumption.
 
You will see a huge improvement in emulation with Skylake, up to 30%. In the long run Skylake is the better investment, unless you are a video editor and don't care about your electric bill running 6-8 physical cores. Skylake is supposed to take advantage of DirectX 12 in the future with better optimization. Other than that, Haswell-E is a waste of money unless you do video editing for a living.
I am sure once Skylake-E comes along it will be a much better processor with lower power consumption.

Not arguing the un-utilized cores or power draw... but I have a hard time believing there are any real-world gains at 1440p+ with any Intel i5/i7 CPU from the past 3-4 years.
 
There's actually quite a few reviews out already that have tested Z170 mobos with M.2 & PCI-E SSDs... Not sure what part I was correct on in the first half either, since I posed two questions that are kinda diametrically apart. ;)

Just don't see Thunderbolt catching on after so long, but I've no idea how the similarly functional USB alt modes will really be implemented at this point. If we're gonna talk confusing, X99 can be just as tough to figure out, with some CPUs providing more lanes and different mobos handling the 5820K's lane count differently...

Sorry, I was on mobile, so I was both rushing and trying to type on my cell phone. :cool:

Either way, I had no intention of debating the uses of Thunderbolt and whether it still has merit on the PC or not. As it stands, USB 3.1 Type-C with a controller that can pass TB signals is its only hope of being adopted. As for other chips, most only offer basic USB 3.1 connectivity; whether there are others that can switch signals I don't know, and to tell you the truth I don't think they will be as good as Intel's Alpine Ridge.

As it stands, X99 only really has 2 configurations, and that's CPU-dependent, but you'll soon see how confusing boards can be with Z170. Honestly, I think if anyone is going to confuse us all it's going to be MSI; I still have trouble figuring out the differences between their Gaming # series :confused:

You could make the argument that it's more future-proof because at least there's always the option of buying a (pricey) CPU that provides more lanes, but depending on the use case, Z170 may very well be more versatile since all its PCH lanes are 3.0, unlike X99's 2.0.

Shoot, the basic layout of most Z170 mobos favors my CF + sound card setup, and I can still slot in a PCI-E drive in the bottommost slot... With most X99 boards, something is getting sandwiched between two GPUs no matter what; that's often overlooked.

I think the whole thing is less black and white than many try to make it seem. Even when you start talking games vs "work", there's common work tasks that still benefit more from clock speed than 6+ cores, just like there's games at both ends of the spectrum.

You are referring to DMI 3.0, which benefits SATA and the M.2 slot if it's attached to the PLX chip. I have high hopes for DMI 3.0 proliferating, hopefully to X99; if not, I can wait for X179.

Honestly, the way I see Z170, I think I favor it a bit more than X99. I know a lot of people like X99 more, but they are two very great chipsets. With games or work the CPU is honestly a mixed bag; to me the chipset matters most, as having sufficient I/O options and connectivity will count for far more than a CPU that will be upgradable. Hell, I'm doing 99% of my work on an OC'd G3258! :D (this will be replaced by a 5775C damnit...)
 
You will see a huge improvement in emulation with Skylake, up to 30%. In the long run Skylake is the better investment, unless you are a video editor and don't care about your electric bill running 6-8 physical cores. Skylake is supposed to take advantage of DirectX 12 in the future with better optimization. Other than that, Haswell-E is a waste of money unless you do video editing for a living.
I am sure once Skylake-E comes along it will be a much better processor with lower power consumption.

DX12 is not a bag of magic fairy dust for the CPU like a lot of posters are saying it is. Most of the benefit only comes if the developer decides to take advantage of the lack of limitations now. Same with the iGPU support that can help render scenes: if it's not enabled, it won't work. There are a lot of ifs here, much like when DX9C was released. It took a good while before we noticed the benefits of 9C, and by then we were using DX10 video cards.
 
DX12 is not a bag of magic fairy dust for the CPU like a lot of posters are saying it is. Most of the benefit only comes if the developer decides to take advantage of the lack of limitations now. Same with the iGPU support that can help render scenes: if it's not enabled, it won't work. There are a lot of ifs here, much like when DX9C was released. It took a good while before we noticed the benefits of 9C, and by then we were using DX10 video cards.

Either way, Haswell-E is old news.
 
The platform is ~1 year old and already has all the "new" stuff that Z170 is implementing.

It has most of it, but you will not have Alpine Ridge, so no USB 3.1 Thunderbolt. I don't mean to nitpick here, but some people really do care about this.
 