> I think the benchmark is no longer considered useful with modern hardware, but it's kind of a running joke to post scores before NDAs expire for new hardware.

It's an older game, but it's possibly some indication of how the CPUs will perform at 1080p, and there's a good chance Ashes (or Windows 10/11) has no idea how to leverage the other cores on the CPU, or it's just defaulting to the single 3D cache die. I wonder if this will be an issue on the 7900 parts; the x900 parts have typically been the highest-end parts for gaming and productivity performance. I'm wondering whether the fact that the cache die on these chips is a 6-core will negatively impact performance...
Yeah, I saw this, and the 7900X3D scored lower than the 5800X3D did in the same benchmark. The only way that's possible is if the benchmark is only using the 6-core 3D-cache CCD and thread scheduling isn't working properly. The 7900X3D got a score of 9,000 and the 5800X3D got a score of 12,000...
There will be some issues to work out before these things are dialed in.
In the majority of games, six cores won't be an issue. The 7600X is barely worse than the higher-core-count CPUs in the majority of games.
They probably thought about it, but after the benchmarks came out showing the 7600X was no slower in games than the 7800X, I suspect they went for profit margin over value to gamers. I recall the 5600X had issues with Cyberpunk initially and the 5800X did not, because the game was looking for 8 physical cores. They patched it and that resolved the issue; I'm wondering if something similar is going on with the Ashes bench.
A 6-core X3D would fly. I wish they had done one by itself.
While I want to agree with what uOpt is saying... I wonder if there isn't something baked into the chips that tells the OS to use the cache-die cores. No idea how AMD is handling thread ordering and such on these chips. The built-in scheduler in Windows 11 plays nice with Intel's big/little cores, so there is software out there that knows how to leverage heterogeneous cores to their best advantage. Maybe it's just that there isn't a scheduler out there that recognizes the new chips yet. That's my guess.
> How would you know whether the big cache die is faster or the higher-turbo one (for any given thread)? Optimized scheduling becomes a big challenge here.

I don't think capturing L2 and L3 hits and misses and passing this info on to the OS scheduler is terribly difficult.
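For what it's worth, the counter-sampling idea can be sketched in a few lines. This is a toy illustration with invented counter values and an invented interface, not how AMD's drivers or the Windows scheduler actually do it:

```python
# Hypothetical sketch of the idea above: sample per-thread L3 counters,
# rank threads by miss rate, and pin the most miss-heavy threads to the
# large-cache CCD. All numbers and the interface are made up; a real
# implementation would read hardware performance counters.

def assign_threads(samples, cache_slots):
    """samples: {tid: (l3_accesses, l3_misses)} -> {tid: "cache" | "clock"}"""
    miss_rate = {
        tid: (misses / accesses if accesses else 0.0)
        for tid, (accesses, misses) in samples.items()
    }
    # Threads that miss L3 most often presumably gain the most from more L3.
    ranked = sorted(miss_rate, key=miss_rate.get, reverse=True)
    return {
        tid: ("cache" if rank < cache_slots else "clock")
        for rank, tid in enumerate(ranked)
    }

sampled = {101: (1000, 400), 102: (1000, 20), 103: (1000, 150)}
print(assign_threads(sampled, cache_slots=1))  # tid 101 (highest miss rate) -> cache CCD
```

Note that even the direction of the heuristic (miss-heavy means "send it to the cache die") is itself an assumption here.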
> 99% of software running on these will have L2 and L3 misses, though. It's a question of whether those cache misses, or a lower effective clock speed, hinder any given app more than the other.

Pulled that 99% number out of your butt quick. The hit rate actually depends on the size of the L2/L3. Threads with higher hit rates are deemed more cache-dependent and are moved to the large-cache CCX.
> Your first statement is false, but your second statement is possibly true.

Let me qualify: hit rate is also dependent on how the thread is actually using data.
BTW - yeah, Cinebench shows gains for the 5800X3D vs. the 5800X.
View attachment 549730
> That's one way. Default to scheduling everything to the high-clock cores and let the CPU tell you when one thread looks like it would be better off with the large cache.
>
> But now think about machine load: if the machine is heavily loaded, the turbo will decrease, whereas the big cache will stay effective. Now what? The whole equation has changed; now the high-cache cores are relatively more attractive.

Hence the reason for the scheduler to balance processes.
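The load-dependent crossover being described can be mocked up with toy numbers. Every value below (turbo curves, the cache IPC bonus) is an assumption for illustration, not a measurement of any real part:

```python
# Toy model of the tradeoff: the high-clock CCD wins while turbo is high,
# but under all-core load turbo sags and the V-Cache CCD's (roughly
# load-independent) cache benefit takes over. Every number is invented.

def effective_perf(active_cores):
    clock_ghz = 5.6 - 0.075 * (active_cores - 1)   # assumed turbo curve
    cache_ghz = 5.0 - 0.050 * (active_cores - 1)   # V-Cache CCD clocks lower
    cache_ipc_bonus = 1.10                          # assumed +10% from extra L3
    return clock_ghz, cache_ghz * cache_ipc_bonus

for n in (1, 4, 8, 12):
    clock, cache = effective_perf(n)
    winner = "cache" if cache > clock else "clock"
    print(f"{n:2d} active cores: clock CCD {clock:.2f} vs cache CCD {cache:.2f} -> {winner}")
```

With these made-up curves the winner flips somewhere in the all-core range, which is exactly the "equation changed" situation being described: the same thread placement that was right at light load is wrong at heavy load.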
> That is the instruction cache, though, which is an L1 cache and not relevant to the caches discussed in this thread.

You are correct, the instruction cache is irrelevant; my mistake. I will remove it and notate.
> Hence the reason for the scheduler to balance processes.

1. AMD says it's not as complicated to get this working, compared to big/little.
2. There is a driver aspect to this; it's not only the CPU and the scheduler. I feel pretty certain they will profile specific games in the (chipset) drivers and lasso processes with that.
> 1. AMD says it's not as complicated to get this working, compared to big/little.

Well, yes, but how does the scheduler know that the high-turbo cores are less attractive at any given point in time (under all-core load)? If it knew, it would schedule more onto the high-cache cores. But it doesn't know; only the CPU knows what the max turbo is.
So we need two pieces of information:
1) The CPU tells the scheduler which threads have more cache misses
2) The CPU tells the scheduler what the expected max turbo frequency is
But the scheduler doesn't have absolute performance numbers, it still has to guess what amount of re-balancing to the high-cache cores is appropriate.
Also consider a single-threaded workload. How do you determine whether the current cache hit rates warrant the high-cache or the high-clock cores? This is easier with multiple threads, as you can compare them to each other and just put those with worse hit rates on the cache cores.
Even then, if you have 4 threads and the CPU told you which have more cache misses, how do you know it is best to put 2 on high-cache and 2 on high-clock cores? The whole shebang could run faster with everything on one kind of core.
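That "whole shebang on one kind of core" point can be demonstrated with a brute-force toy model. The per-thread cache affinities and the shared-L3 dilution model below are invented purely to show that knowing each thread's affinity still doesn't tell you the best split:

```python
# Toy illustration: 4 threads, two core types, and a cache benefit that
# dilutes as more threads share the V-Cache CCD's L3. All numbers invented.
from itertools import product

base_time = 10.0  # assumed per-thread runtime on a high-clock core
cache_bonus = {"t1": 2.0, "t2": 1.8, "t3": 1.6, "t4": 1.4}  # assumed affinity

def total_time(assignment):
    on_cache = [t for t, c in assignment.items() if c == "cache"]
    k = len(on_cache) or 1  # threads sharing the cache CCD's L3
    total = 0.0
    for tid, core in assignment.items():
        if core == "cache":
            total += base_time / (1 + cache_bonus[tid] / k)  # diluted benefit
        else:
            total += base_time
    return total

threads = list(cache_bonus)
best = min(
    (dict(zip(threads, combo)) for combo in product(("cache", "clock"), repeat=4)),
    key=total_time,
)
print(best)  # with these numbers, all four threads land on the cache CCD
```

The point of the exercise: a scheduler that only sees relative miss rates would "split 2 and 2", yet with these particular numbers exhaustive search says everything belongs on the cache die. Different invented numbers flip the answer, which is why miss counts alone don't settle it.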
> Actually, a higher cache miss rate would deem a thread more cache-heavy. A higher hit rate would indicate that the given thread has a sufficient amount of cache at its disposal.

A higher cache hit rate deems the thread cache-heavy. A thread with a low cache hit rate will probably not increase its hit rate by getting more cache, versus a thread with a high hit rate; at least at the cache sizes we're talking about for these CPUs (144 MB of L2/L3 and less).
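A crude working-set model makes the "more cache won't help a low-hit-rate thread" claim concrete. The model (hit rate roughly equals the fraction of the working set that fits in cache) and the working-set sizes are assumptions for illustration only:

```python
# Crude working-set model: a thread whose working set dwarfs both cache
# sizes barely improves going from 32 MB to 96 MB of L3, while one whose
# working set fits in 96 MB improves a lot. Model and sizes are assumed.

def hit_rate(working_set_mb, cache_mb):
    return min(1.0, cache_mb / working_set_mb)

for ws in (24, 64, 512):  # illustrative working-set sizes in MB
    gain = hit_rate(ws, 96) - hit_rate(ws, 32)
    print(f"{ws:4d} MB working set: hit rate {hit_rate(ws, 32):.2f} -> "
          f"{hit_rate(ws, 96):.2f} (gain {gain:+.2f})")
```

Under this model the big winners are threads in the middle: a 24 MB working set already fits in 32 MB, and a 512 MB working set still mostly misses even at 96 MB, so neither gains much from the extra cache.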
> but I checked out with a pretty hefty PCPartPicker list last night and decided to go the route of the 7900X

If you have access to a Micro Center, that $600 combo with mobo/RAM is really, really hard to ignore.
I don't, but I had a friend who got me a "deal" on two components that made the rest of the build worth it. I'm sure the 7900X will be more than fine; I just haven't built since 2019, and I haven't built an AMD machine since... holy shit, Athlon XP. It seems like newer architectures come not needing, or able, to really do anything related to OCing. I've had my 9900K sitting at 5 GHz for three years without a problem. I truly upgraded because I was tired of 3090 temps turning my room into a sauna, hoping the 4090 and a bigger case with a lot more airflow will do the trick. That being said, while the newer GPUs seem to have "fixed" what the previous gen had a problem with (in the form of these massive heatsinks), the newer CPUs run a lot hotter at load.
> Running a decently tuned 5950X / B-die DDR4 3200@3600 CL14 (mostly stock) setup, so likely no, given the complete platform overhaul. Still interested, and if the performance delta is there... maybe?

I wouldn't switch just yet.
If you want a cooler room, you need to reduce the power you're pulling from the wall. Lowering the temps of a component won't make your room cooler.
Running a decently tuned 5950X / B-die DDR4 3200@3600 CL14 (mostly stock) setup, so likely no, given the complete platform overhaul. Still interested, and if the performance delta is there... maybe?
If I had your system I wouldn't upgrade. Maybe 3 years from now.
> Ditto. Even if there's some level of uplift, it's going to be situational and only noticeable in super specific circumstances. By the time a game makes the difference noticeable, the 8950X3D (or 9750X3D) could be an even better option. If I owned any of the higher-end Zen 4 parts I'd stick with 'em. The X3D parts might be the best choice for a new buyer (maybe?), but it seems pointless for current owners.

It depends on what you demand from your system in gaming. The 5800X3D can be a real benefit to high-refresh gamers on high-end GPUs compared to regular Zen 3 CPUs, as it brought next-gen CPU performance to AM4. In many games, it delivers minimum framerates approximately equal to the average framerates of the non-X3D chips. That's pretty crazy. And we can see now that it is similar to Zen 4 chips on AM5 and Intel's Raptor Lake.
> I was sad they never released a 5950X3D with one 3D CCD and one good 5.0 GHz+ boosting non-3D CCD.

Well, if you haven't heard, they are releasing the 7900X3D and 7950X3D over a month before the 7800X3D.
I use my system for 4K gaming, but I also do a lot of work in VMs for my job and like having the extra ass for that type of work when needed. Not gonna swap CPUs in and out based on the task at hand... lol
Yeah, I know, but for 4K gaming it's not worth a new platform move at this time. Most of my games with a 4090 are either locked at 144 Hz, or the frame rates are still well above 100 FPS. Can't say I have noticed the 1% lows being a factor in any game I play; 4K tends to be an equalizer in that regard, and FPS swings are not so vast as to make it noticeable.
> If funds were unlimited then we would never have this discussion.

I mean... the $599.99 combo at Micro Center exists. I was able to make money selling my AM4 gear to go that route.