Threadripper 7980X & 7970X benchmarks

The discussion was about smoothness and the ability to do many things at once. Then somebody made assumptions about memory limitations based on what kind of CPU they figured I had... etc.
In all fairness, I might have been drunk
I’ve long favored space (one machine, one display, one desk, no shuffle) as my priority but that appears to be something I am increasingly likely to give up.
Yup.
I use the same main monitor. Just switching input. Just shuffling keyboards.
Good point actually. Good point.
 
I'm with Zara on this generally - when I bought my HEDT rig it was the no-compromises (aside from a considerable, though reasonable, cost increase) "enthusiast/overclocker mixed use" platform. I spent years with X58, and even today my old X99 system may not top benchmarks but has been exceptionally capable for gaming and other general usage. Old-school HEDT generally launched relatively shortly after, or even BEFORE, the mainstream chipset; it had equal or greater core counts AND similar or better frequencies/cache/overclocking headroom, plus chipsets with features and options well beyond the mainstream: tri- or quad-channel RAM, additional PCIe lanes, and often (at least on higher-end boards) an E-ATX layout full of massive expansion potential, both in terms of physical ports and the features to make use of them. Onboard components were typically higher end than what you'd find on a premium mainstream-chipset board, and certain standards (e.g. 10Gb Ethernet) would come to these boards first, figuring it would be the HEDT owners who made use of them once they left the aftermarket or server/workstation/commercial-only fields. Overclocking and other usage was unlocked and performance focused, with comprehensive BIOS/UEFI options, subject to variation depending on the tier/type of mobo you chose.

The idea was a powerful, long-supported, expandable, and widely capable platform made for a variety of potentially evolving workloads. It could game at the level of the best mainstream boards or better, while also running creative, server, or other types of tasks with high performance; perhaps not as high, and without some of the features, of the massively expensive Xeon or EPYC parts, but considerable nonetheless. Price increases were significant but often reasonable given the platform changes; often a "top" CPU was $1000 or so, with others being comparable to some of the higher-end mainstream-platform CPUs. The mobos carried a surcharge for the platform and were a little more expensive than an equivalent mainstream tier, but it wasn't massive. All of this started to change in recent years, when HEDT either went away entirely or what was left became "Workstation/Server, Junior Grade." Processors got more and more expensive, focusing primarily on "moar cores," and chipsets grew in price in kind. Single-threaded performance and overclockability often weren't actually up to par with the mainstream platform, and of course the value/price equation was way out of whack versus what it used to be. Availability was more limited as well, and launches tended to come later in the equivalent process's life, adding more questions. For those with primarily high-core workloads that could justify the increased cost of the platform, it was still a good option and a step between mainstream and the massive expense of EPYC/Xeon, but this was a markedly smaller use case than what HEDT served previously.

I'm a bit disappointed in some of the benchmarks for single- or few-core performance, including the gaming ones; it seems the TR7000 series is in some cases not even the equal of the Ryzen 7800X3D or 7950X3D, which seems unusual to me. In theory these chips, even if they were only using a few of their cores and at least able to turbo up to the stated 5.3GHz, all while having a comparable or even superior amount of cache accessible symmetrically, plus greater RAM bandwidth from quad channel, should at least equal the mainstream if not surpass it, right? I grant there may be a certain amount of "too many cores" or other scheduler issues, but that seems like it should be less of a problem, and in the past those issues tended to make games crash, not simply underperform on framerate. Maybe these will resolve in time as the platform rolls out, with firmware and software updates etc., but that's a hard thing to purchase on an "if." Now, clearly, at least from the Guru3D reviews, even if they're not chart-topping, the gaming performance / single- or few-core performance / frequency isn't horrible, but especially at those prices "not horrible" is hard to justify, especially when it's not quite clear why they're not more performant. Still, it's a step forward compared to the near-total high-core-workload focus of the previous generation, but the similarly higher pricing and later debut make it a bit harder to choose and, at least at the moment, not the full return to HEDT form that some of us, myself included, were hoping for.
Was the big change, with HEDT coming later and later, down to Intel's massive delays with Sandy Bridge-E compared to the regular parts, and then seemingly every generation thereafter?

The regular non-"C" cores on Threadripper's chiplets are supposed to be the same as the chiplets on regular Ryzen, just binned for better power efficiency, right? Current Threadripper would probably need a major BIOS adjustment to let small numbers of cores really use all of Threadripper's power envelope and the thermal headroom offered by the chip's extra-large footprint and heat spreader. Even then, at best you'd only be a little ahead of a 7950X in 1-4 core frequencies, plus whatever help you could get from quad versus dual-channel RAM (minus whatever additional latency penalty there is for ECC).
 
There were a couple of reasons for the big change in HEDT. Yes, Intel was constantly delayed by the validation and binning problems stemming from their fab issues, which then forced them to recycle designs or modify designs to work on older nodes.
But the real issue was that HEDT sort of died off as the software didn't keep pace with the hardware offerings. We reached a point where an i9 with a decent motherboard performed as well as, if not better than, the much more expensive Xeon offerings; not because the Xeon wasn't good, but because the software wasn't updated to use its capabilities, so the i9's faster clock speeds delivered bigger performance gains and the cost/benefit of a Xeon-W part just wasn't there, as it added little to no tangible performance increase. There were some instances where the decreased PCIe lanes were a significant issue for workstation use, but eSATA, Thunderbolt, and advancements in USB meant many of those devices could be moved to an external device or into an enclosure, or faster PCIe lanes could be bifurcated and stepped down (16 lanes of PCIe 3.0 converted to 32 lanes of PCIe 2.0, that sort of thing, with add-in cards and cable risers). That brought about a decrease in demand, which raised prices and caused Intel to put their HEDT offerings on a shelf way in the back as they shifted resources elsewhere.

Threadripper originally launched into a weird place in the market. It was positioned to be stupidly disruptive, so much so that Intel took a look at it and said, "You want to do that, at that price point, with that level of support... OK, fine by us, have fun!"
And yes, AMD gained a foothold because it was cheap and sort of did everything, but Intel was right: AMD could not maintain the support, feature set, or price, and has adjusted accordingly with subsequent releases, burning more than a few bridges along the way.

Much of why games just don't perform on Threadripper as they do on Ryzen has more to do with development than with the CCXs or how they are joined in the CCDs. Games for the most part still use only 2 cores for the heavy workloads; game logic for other parts expands downward to 6 or maybe 8 threads, but those threads are significantly lighter and padded with empty jobs to keep clock speeds and consistency up. To maintain memory-access consistency, games are designed around 2 primary threads, since in a dual-channel memory system only 2 threads can be talking to system RAM at any given time, and expanding the primary game threads past 2 introduces engine-scheduling issues that few developers are equipped to handle and that would cost more in development than they are willing to spend. Games just don't automatically scale out the way workstation jobs do; games by their nature are highly interactive with an extreme number of input interrupts, and developers can't just let things expand because resources are there. That leads to unpredictable results that are complicated and expensive to troubleshoot.
Workstation jobs, though, are generally broken into batches, where each is independent and tagged so they can be processed out of order as threads become available, and memory access can safely be kept available in a simple round-robin approach.
So games, and their engines, are designed for 2 heavy workload threads, 6 to 8 sub-threads that get sparsely used for game elements, and 2 memory channels, all hardcoded. Pair that with the much higher latencies of RDIMMs (even at equal timing values the RDIMMs are doing extra work in there, which increases actual read/write times), throw in a memory controller that is not at all optimized for software demanding only a subset of its channels, and you get a far less optimal environment for those games even if the environment itself is better in every measurable way.

Aside/Rant/Note
But AMD's CCX design has proven to be the big winner here. Ryzen through EPYC all use the same CCX design; the only variation is how they are attached in the CCD and how many CCDs there are, so AMD can save vast quantities of money on design, validation, implementation, and back-end programming because of the huge amount of part reuse across their entire product range. Until Intel gets their house in order and does the same, AMD will continue to mop them up, because AMD can do more for less by designing less and reusing it more. TSMC being a full two generations ahead of Intel doesn't help either. Intel's fabrication time may be cheaper because all of it is mature, but Intel's inability to do part reuse anywhere close to AMD's level more than covers that spread, resulting in Intel having to spend more to ultimately do less. Intel can rename its nodes all it wants, but the Intel 7 process is just 10nm+++, not much better than the TSMC 9nm node, which is their 10nm++, which is slightly behind the Samsung 8nm node (hence the "Intel 7" name), but it is nowhere near as good as the TSMC N7 node, let alone N6, which is N7++.
 

Well, this helped make my decision easier, I think. On prior gens it was a competitive "general purpose" CPU while also being a top-notch workstation CPU. Even the HEDT parts are really "workstation" CPUs now, rather than the monstrosities they used to be. I'll miss my Threadripper, but it looks like, for most of us, that time may be passing.

Sigh. This actually makes me sad.
 
I've actually done 5 workstation builds for our office for the purpose of running computational fluid dynamics (CFD).
CFD is unique in that it scales almost perfectly with core count (e.g. 32 cores is faster than 28 is faster than 12, etc.)
CFD also does NOT benefit from simultaneous multithreading (SMT actually slows it down, so we disable it in the BIOS)

BUT

CFD is very memory-bandwidth intensive. We have an EPYC 7702P build but can only use 32 of the 64 cores. We thought AMD might have resolved the issue on the Threadripper 5995WX, but it's the same deal with that build. Limitation of the application.

Point being, there are real-world applications that benefit from these chips. My builds run maxed-out CPU cores 24/7/365. These are awesome chips; kudos to AMD for breaking the Intel tax, because we got MUCH less value for MUCH greater cost out of our previous Xeon builds.
 
Now if only per core licensing could go away.
 