I welcome our new 16 core, on a mainstream socket, overlords... I'm sure they can put me to good use!
Fixed: Learn to speak correctly to overlords.
Don't worry, AMD will have a lower core count CPU that will be better for gaming than this 16-core. Don't hate on a 16-core CPU just because "average home users" can't use its full potential; that's dumb. AMD has a CPU for them too!
I welcome our new 16 core, on a mainstream socket, overlords... I can sure put it to use!
Pointing out the diminishing returns is not "hate".
Bring on the core wars, as it should help lower the price of the 6-8 core CPUs I want, though prices may not drop as fast as many would expect.
Now if we could only get games to be better optimized for more cores!
Lol!
You call me ignorant...
Windows 10 would like a word with you... (updates, Windows Defender/other AV, OneDrive starting up, who knows what else).
My personal system doesn't have this issue, mostly since I minimize/get rid of that crap. Work systems, however...
I always marvel when imaging a new PC with Win10 at how updates, OneDrive updates, and Windows Defender can take so much CPU power as to almost cripple a dual core and make an i5 quad struggle. PCs with really fast SSDs love to have CPU power, as storage has always been far and away the most bottlenecking component as far as user-perceived responsiveness goes.
Like Crysis 3, which works great with 8 cores like the FX had. Oh well, things will eventually change; just going to have to give it time.
Windows 10 on an i7-2640M, which is a dual-core hyperthreaded Sandy Bridge mobile CPU maxing out at 2.8 GHz, with a 480 GB SSD and 8 GB RAM. With Kaspersky, Skype, TeamViewer, Discord, Steam, and so on, it takes about 1 minute from cold boot to desktop, and maybe another 30 seconds to open Chrome. Used primarily for web browsing, YouTube, and some lightweight gaming. I have not experienced background tasks causing my games to lag, so I highly doubt a modern desktop quad-core is going to get bogged down by background tasks on a home system. Business is a different story entirely; they still haven't figured out how to put SSDs in most of their laptops.
It's easy to fill out a core when you're doing unoptimized code that doesn't add much to the experience. I can max things out easily with PhysX calculated on the CPU.
Things can change, but as stated, I don't see programming fundamentally changing unless our processors fundamentally change, i.e. switch to quantum computing. The same old silicon design is going to have the same old restrictions.
I'd say that my PC spends more time compressing pictures and videos than playing games, but that's mostly because it does it while I'm not present. Does it count as "time with a computer" if I'm not there?
Memory bandwidth will double, power consumption will drop, and latencies will rise, making the bandwidth gained a moot point in the short term.
To expand, as I haven't seen anyone explore DDR4 vs. DDR5 yet, and absolutely not attempting to correct Dan:
Do I think latencies will rise? yerp, do I think that will make a difference? nope.
Anandtech just did a good comparison of the 2600k vs modern quad cores and an eight core 9700k. Further reinforces my view that CPU core counts above 8 may go mostly unused for years.
https://www.anandtech.com/show/1404...el-core-i7-2600k-testing-sandy-bridge-in-2019
No, because if it's doing it in the background or when you're not present, it doesn't need to be fast; overnight compression etc. is not time-limited. 6 cores would adequately service your needs.
I can't believe people here are arguing about having too many cores. Intel fanboys will grasp at straws to find ANYTHING they can attach their hate to.
When my "overnight" encoding tasks equal 12-24 hours and I only have 8 hours in a night to get it done, yes, I do need faster because it is time limited.
The "fanboy hate" hyperbole is total nonsense. The only true brand loyalty anyone rational has is to performance, as it relates to their type of usage. Video transcoders are outliers.
Right - fringe case. 6 vs 8 cores, assuming everything else is equal, for encoding you'll end up with ~33% more performance at best. 8 vs 24 hours is 3x.
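The arithmetic behind this exchange is worth making explicit. A minimal sketch, assuming perfectly linear scaling with core count (which real encoders rarely achieve):

```python
# Rough speedup arithmetic for the 6- vs 8-core encoding debate.
# Assumes perfectly linear scaling with cores, which is a best case.

def speedup(cores_new, cores_old):
    """Ideal speedup from a core-count change under linear scaling."""
    return cores_new / cores_old

def new_runtime(hours, cores_old, cores_new):
    """Runtime of the same job after a core upgrade, same assumption."""
    return hours / speedup(cores_new, cores_old)

print(round(speedup(8, 6), 2))   # 6 -> 8 cores: ~1.33x, i.e. ~33% faster at best
print(new_runtime(24, 6, 8))     # a 24-hour job still takes 18.0 hours
print(24 / 8)                    # fitting it into an 8-hour night needs 3.0x
```

Under that generous assumption, two extra cores buy roughly a third more throughput, while the overnight-window case above needs a 3x speedup: far more than a 6-to-8-core jump delivers.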
What I'm seeing from here: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/8 is that for encoding, AVX (2/512) is king, and even 50% more cores (threadripper/12 core) doesn't beat clock speed.
So take-aways from the 1080p HEVC chart:
1. The 8086k and 8700k beat out the 2700x and threadripper, with fewer cores.
2. 6->8 cores doesn't scale linearly on AMD (see 2600x to 2700x)
3. 6->8 cores on Intel does scale linearly.
4. Hyperthreading gives you 10%.
My earlier post suggested 1.87x bandwidth differences between release DDR5 vs DDR4-3200.
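For reference, the theoretical peak numbers behind that ratio can be sketched as follows. The assumption that "release" DDR5 means roughly 6000 MT/s is mine, inferred from the 1.87x figure; it is worth noting that the first retail DDR5 modules actually shipped at 4800 MT/s:

```python
# Theoretical peak bandwidth: transfers/s x 8 bytes per 64-bit channel.
# DDR5-6000 is assumed here only because it reproduces the ~1.87x claim
# against DDR4-3200; it is not an official "release" speed.

def peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Peak bandwidth in GB/s for a transfer rate and channel count."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

ddr4 = peak_bandwidth_gbs(3200)   # ~51.2 GB/s dual-channel
ddr5 = peak_bandwidth_gbs(6000)   # ~96.0 GB/s dual-channel
print(ddr5 / ddr4)                # ~1.875, matching the ~1.87x figure
```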
Modern CPUs have a lot of prefetching and large caches which is why latencies don't matter a whole hell of a lot.
6 vs 8 vs 1,050,667,889,999 cores. What's the damn difference if all you guys are only using software that supports 6 cores max, like Handbrake?
The link to Anand's is using what was free and easy. Handbrake, LMAO.
I'm just saying eat a box of rock salt when you reference Handbrake as the de facto standard of encoding performance.
H265 can support gobs of cores if it's utilized properly. Not just 6.
AMD releasing CPUs with more and more 8-core chips glued together is impressive, but if they ever release something that beats Intel's single-threaded IPC performance then we'll see perceived Intel loyalties quickly abandoned.
An even easier way is to open another instance of Handbrake... yes, you can do that!
I use vidcoder since it does that automatically.
With filters and such I find I have to open 3-4 x264 encodes to max out my 1700, and 2 for x265 encodes.
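For anyone who would rather script this than juggle GUI windows, here is a rough sketch of the multi-instance approach. The file names and ffmpeg/x264 settings are placeholders, not a tested recipe:

```python
# Run several independent encodes concurrently, since a single x264/x265
# instance often can't keep every core busy on its own.
# Filenames and encoder settings below are illustrative placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def encode_cmd(src, dst, crf=22):
    """Build an ffmpeg libx264 command line for one file."""
    return ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst]

def run_all(jobs, workers=4):
    """Encode up to `workers` files at once; blocks until all finish."""
    def encode(job):
        src, dst = job
        return subprocess.run(encode_cmd(src, dst)).returncode
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode, jobs))

# Example job list; run_all(jobs) would launch 4 simultaneous encodes,
# matching the 3-4 instances it reportedly takes to max out a Ryzen 1700.
jobs = [(f"in{i}.mkv", f"out{i}.mkv") for i in range(4)]
```

Threads (rather than processes) are fine here because each worker simply blocks on an external ffmpeg process; the encoding itself happens outside Python.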
You must be targeting fairly low resolution for your encodes.
As I explained above (#224), it's the nature of video encoders to have a minimum block size. If you encode to a lower resolution, you get fewer blocks, and you can utilize fewer threads.
Benchmarking encoders should really use 4K files these days.
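To put rough numbers on the block-size argument: x265's wavefront parallel processing (WPP) hands threads one CTU row at a time, and with the default 64x64 CTU size the row count per frame, and hence the usable thread count, scales directly with resolution. A quick sketch:

```python
# How resolution bounds encoder parallelism: with x265's default 64x64 CTU,
# wavefront parallelism has only height/64 rows of blocks to hand out.
from math import ceil

def ctu_rows(height, ctu=64):
    """CTU rows available to wavefront threads at a given frame height."""
    return ceil(height / ctu)

print(ctu_rows(480))    # DVD-ish 480p: 8 rows
print(ctu_rows(1080))   # 1080p: 17 rows
print(ctu_rows(2160))   # 4K: 34 rows, roughly twice the threads to feed
```

Row count is an upper bound, not a guarantee (inter-row dependencies keep real utilization lower), but the trend fits the observation that 1080p struggles to feed 8 cores fully while 4K does not.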
1080p CRF22 blu ray for x265 and whatever dvd is for x264 (480p?).
1080p must be too low of a rez to use 100% of 8 cores (it uses about 90%).
So is this AMD Ryzen 9 16 core CPU (and really, all upcoming Zen2 CPUs) going to have really poor latency to memory, thus causing an overall slowdown for tasks that do not fit into L1/2/3 cache, as compared to Zen+?
I can see how Cinebench R15 and R20 might allow AMD Zen2 to look super fast and efficient, such as what we saw at CES, but now I'm beginning to worry that in actual real-world performance Zen2 8C/16T CPUs will be slower than the 2700X in certain tasks simply due to the multi-die architecture and the memory latencies introduced within. The faster per-core speed and IPC boosts that Zen2 offers may allow for an overall speed boost compared to the 2700X, but if Zen2 were artificially clocked at 2700X speeds and compared to the 2700X, it would be slower. Plausible?
Unlikely; they would get slaughtered if they did that. It is more likely that they will just increase the amount of memory available internally to make up for any speed shortfalls, to act as a sort of buffer. The Threadripper 16C/32T didn't really show any problems in that department, and I think it unlikely they would introduce it in a further iteration. I do think, though, that they are not actually going to get the leaked clock speeds out of the gate, and we will see those in later product refreshes.
It's an unknown, though it makes some sense that AMD wouldn't have used chiplets on the desktop if it really introduced a significant latency. For server type workloads it hardly matters, but for real time desktop usage (games) it can matter quite a bit.
AMD will have a relatively large cache in the I/O chip that should help a fair bit. It will really be interesting to see how this design works out.
You didn't read your own post. Before sperging out at me and calling me a child, maybe read the post of yours I was referring to? I operated with four threads in Windows for 'office editor' functions, amongst others, with an i3; it sucked ass with documents containing images and was not sufficient for the job. That's the point I tried to make: your assertion is incorrect in my experience.
But yes I agree with you saying that gamers don't need 16+ threads. 8 is enough for now. In future, maybe not though.
Edit to add: the i3 I used was the same speed clock-wise as my 2600k at stock. The 2600k has a small OC, around 4.2 or so, for stability. The biggest difference was threads (maybe cache?) and it was night and day.
No, they are not "perceived Intel Loyalties" but real ones. There are folks here that will not abandon Intel no matter the reason.
Wow, now Intel is affected by some newfound vulnerabilities and AMD is not affected by them. In fact, Intel is recommending turning off hyper-threading on any processor older than the 8000 series. Seriously, you cannot make this stuff up. This is another plus for AMD, but they cannot sit on their laurels; they need to take advantage of this and not stop kicking just because Intel is on the ground. This 3000 series from AMD may be an even bigger deal because of this issue.