After 64 Cores?

Epyon

Gawd
Joined
Oct 25, 2001
Messages
930
After trying out GPU 3D rendering, I am finding that it is still garbage. So I am really looking forward to the 64-core TR. My question is: when they drop down yet another node, to say 5 or 3, are we just going to see more cores, or are they going to do something different? Where do you think the cutoff is as far as cores go? I only ask because things seem to be moving hella fast. I guess I was used to Intel keeping us at 4 cores for 10 years.
 

Mega6

2[H]4U
Joined
Aug 13, 2017
Messages
3,556
Sometimes the sky seems black but then blue. We are not going to see fewer cores, I know that. What specifically is the question?
 

Ready4Dis

2[H]4U
Joined
Nov 4, 2015
Messages
2,483
More cores, then probably specialized hardware, like ray-tracing intersection cores, more AVX instructions, custom instructions for specific workloads, more AI integration, etc. I wonder if they'll eventually move to a mix of different cores: 2-4 CCXs with the same instructions in each, possibly one ARM core or similar, a few x86 cores, another for AI, etc., all in a single package. Who knows, we have a few years; maybe Google's quantum supremacy result will be extended to something useful.
 

Ready4Dis

2[H]4U
Joined
Nov 4, 2015
Messages
2,483
Oh, and core cutoff is a relative term. Desktop, workstation, server? Real cores like full x86 cores, or other hardware that can do work (like an AI core, or some specific scientific core, possibly a lower-power core for energy savings on minimal loads, or for security of sorts)? I think as we get smaller it will be harder to remove heat from the density of the CPU. Maybe a better Infinity Fabric and I/O bridge to allow further separation to aid in cooling, or maybe AMD/Intel will start going 4 SMT threads per core instead of 2, so maybe we'll see something like a 64/256 desktop CPU in 8 years?
 

Epyon

Gawd
Joined
Oct 25, 2001
Messages
930
Core cutoff for desktop is what I would be wondering about. My 3900X finished a render in 2 hours. I know it's not a perfect scale: 12c = 2 hours, 24c = 1 hour, 48c = 30 min. I see now. I don't think SMT4 would help much. Well, you did give me ideas on how they might go. I mean, if they can keep adding cores without losing too much performance, that would be something.
 

DrLobotomy

Supreme [H]ardness
Joined
May 19, 2016
Messages
6,736
Time to start stacking those cores so we can get 640-core CPUs. CPUs could be a little thicker anyway.
 

tangoseal

[H]F Junkie
Joined
Dec 18, 2010
Messages
9,317
More cores are just an expense if you're not using them because of software limits.

A 64-core will let you run multiple 8-core instances, for example.

But it's not going to allow you a full 128 threads on one instance with 100% scaling. It just doesn't work that way.

Good luck; don't waste your money on a misunderstanding. Get a 3900X and be happy.
 

IdiotInCharge

NVIDIA SHILL
Joined
Jun 13, 2003
Messages
14,679
We'll start seeing CPUs broken into ARM-like clusters of big general-purpose cores and little cores, specializing in different hardware acceleration.

Basically the chiplet idea, but with some complexes made up of specialized circuitry in place of the general x86 cores they're using now... probably with an ARM core in there to coordinate stuff (or perhaps RISC-V someday).
 

Derfnofred

Gawd
Joined
Dec 11, 2009
Messages
606
Scaling for a huge number of computational problems, especially for a single user, is generally not kind. So for a single user, big, flexible, superscalar general-purpose cores are definitely moving into marginal-gains territory. That's even with demanding software like CAD/FEA, which has a lot of single-path dependencies, where a high-clocked 8-core with absurd memory bandwidth (and as low latency as you can get) will outperform more cores (for a single solution; batching changes that arithmetic). Amdahl gets the last laugh.

There are certain special workloads that benefit *tremendously* from dedicated hardware/coprocessors. Higher density = more silicon to allocate to those roles. Higher density = more room for cache, especially L3 (are we going to see L4 chiplets/stacks?!). Higher density = more need for heat spreading to lesser used parts of the chip, so a smaller and smaller percentage of the overall silicon dedicated to GP cores and their higher power requirements.

So, for single-user machines wanting to do one computational problem at a time, huge core counts don't really make sense. More hardware dedicated to specific workflows, and better programming to leverage said hardware, looks like the way forward.
 

PliotronX

2[H]4U
Joined
Aug 8, 2000
Messages
2,070
We might see core counts slow or stagnate, with SMT picking up the focus on thread count. For example, Zen 4 is slated to use 4 threads per core.
 