Threadripper 7980X & 7970X benchmarks

I wonder if Civilization could leverage it; a core for every civ should speed up those late-game turns :)

Yeah, my googling was turning up that Civ 6 tops out around 20 threads...

I did all my testing back in 2019 after first buying the threadripper. I probably posted it on here somewhere :p

What I do remember was that it was WAY fewer threads than I expected based on all of their bragging about multithreading.

Of course I did.

[Attached: Civ 6 Gathering Storm AI benchmark chart (DX12)]


From this thread (which has more context)
 
I think the whole idea of having an HEDT workstation is so you can multitask, doing work while you play. For example, I'll have virtual machines running in the background on a Fusion-io card while I run off a batch of photos in Lightroom, or encode 4K video between two NVMe drives while macro-crafting in FF14, with a video playing in Chrome with lots of tabs open, while playing another game. I actually max out this 3960X at 100% often, so it was totally worth it. What annoyed me was I couldn't drop in a 5970X 32-core upgrade when I thought I could later. I may get whatever Threadripper is newest in 2025 when Windows 10 goes EOL. I am building a new server this year.

Onboard stuff is a mixed blessing. Sometimes you actually want onboard stuff if you're going to shove two 3090 cards into a build for AI, but other times you want the freedom to pick what controllers you use, like running multiple Fusion-io / M.2 SSD cards / HBAs for spinning rust.
 
I know I'm never concerned about what I'm doing while I'm gaming. I have 16c/32t. Never feel any issues. Have several VMs, etc. Might be streaming TNF, or something on Plex, or something on youtube, etc.
 

Old habits die hard.

I have a 24C/48T Threadripper 3960x, and I still shut down everything before starting a game, like it's 1996 and I have just one core and just barely enough RAM to make it run. I won't even leave anything running in the task bar. I want to make absolutely sure nothing gets in the way ever, even for a moment, if the scheduler messes up.
 
I force close several things before running some games, though not like we used to; it's been a hard habit to break.
 
I just came here to look at the Cinebench R23 multi-thread score:

Paul's Hardware got 95,270 for the big chip. HOLY MOLY.

My reference there is that it takes about 18,000 points' worth of CPU to encode a high-quality 720p stream to Twitch.
 

So you only need 12 of the cores for that? :p
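Rough back-of-napkin math behind that quip, assuming the R23 multi-core score scales roughly linearly with core count on the 64-core chip (the 18,000-point encode figure is the poster's estimate, not a measured spec):

```python
# Hedged estimate: how many of the 7980X's 64 cores a ~18,000-point
# CPU encode would "use up", assuming linear scaling of the R23 score.
total_score = 95_270   # Paul's Hardware R23 multi-core result
cores = 64             # 7980X core count
encode_cost = 18_000   # rough points needed for a high-quality 720p Twitch encode

points_per_core = total_score / cores           # ~1,489 points per core
cores_for_encode = encode_cost / points_per_core
print(f"~{cores_for_encode:.1f} cores for the encode")  # ~12.1 cores
```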
 
I was waiting for the reviews. So it is not as impressive as when the 3000 series Threadrippers launched and shocked everybody with a whopping 64 cores. It is not as good for gaming, there are fewer PCIe lanes than expected, and the mainstream desktop CPUs are right up there. And the motherboards are what, more than 1,000 USD? That's too much for just the board. I remember for the 3000 series it was more like $500. I think the new non-Pro Threadrippers are harder to justify now unless you have specific multi-threaded needs. They are less of a "do it all" chip.

I want to see how they overclock with water cooling, but maybe it is still early for this.
 
there are fewer PCIe lanes than expected
How many were expected versus how many did we get?

I feel 80 for the non-Pro edition could be more than many expected (64 would have been many people's guess, and possibly the choice had they gone with a 100% Gen 5 configuration).
 

Yeah, we already said that when the announcements came - the Pro is actually more attractive. In addition to the obvious factors there also is the nitpick that TR this time around only supports 1 DIMM per memory channel. So non-Pro can only ever have 4 modules.

As for PCIe lanes - 48 is pretty short. That is just 3 slots of PCIe x16. 2 Graphics cards and a 4x NVMe card and they are swallowed up. That is without first diverting anything toward on-mainboard devices.
 
Pro performance, pro price as always. ;-)

As for playing games, doing that on this is like buying a tank to go squirrel hunting. Cut that crap out! :-P
Of course it is not for gaming, and if your main task is gaming then going for Threadripper is a bad choice both performance-wise and cost-wise. However, the previous approach was better, I think. I don't game as much as before, but I'd like to have a machine powerful enough to run games smoothly if I decide to try some titles.
 
As for PCIe lanes - 48 is pretty short. That is just 3 slots of PCIe x16. 2 Graphics cards and a 4x NVMe card and they are swallowed up. That is without first diverting anything toward on-mainboard devices.
Isn't it 80? 48 PCIe 5.0 + 32 PCIe 4.0? I think 88 lanes in total are usable when you add the chipset.
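Just to put numbers on that exchange, here's a quick lane-budget sketch using the figures quoted in the two posts above (48 Gen 5 lanes vs. 80 total CPU lanes; treat both as the posters' figures rather than gospel):

```python
# Hypothetical lane budget for the example build mentioned above:
# two x16 GPUs plus a x16 quad-NVMe carrier card.
devices = {"GPU 1": 16, "GPU 2": 16, "4x NVMe carrier": 16}
needed = sum(devices.values())  # 48 lanes

for label, cpu_lanes in {"48 (Gen 5 only)": 48, "80 (Gen 5 + Gen 4)": 80}.items():
    left = cpu_lanes - needed
    print(f"{label}: {left} lanes left for NICs, HBAs, onboard devices")
# 48 (Gen 5 only): 0 lanes left ...
# 80 (Gen 5 + Gen 4): 32 lanes left ...
```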
 
Yeah, we already said that when the announcements came - the Pro is actually more attractive. In addition to the obvious factors there also is the nitpick that TR this time around only supports 1 DIMM per memory channel. So non-Pro can only ever have 4 modules.

As for PCIe lanes - 48 is pretty short. That is just 3 slots of PCIe x16. 2 Graphics cards and a 4x NVMe card and they are swallowed up. That is without first diverting anything toward on-mainboard devices.
Sorry for my ignorance, but I want to understand this memory thing better. So now you can only install a maximum of 4 RAM sticks in non-Pro Threadrippers? This is because this Threadripper is quad-channel and you can only have 1 stick per channel? And in order to run in quad-channel mode you will need 4 sticks (DIMMs). But if you use 4 sticks with DDR5, your RAM speed will be lower? So you should use 2 DIMMs if you want a higher speed, but this way it will run in dual-channel mode?

As I can see, the maximum RAM for this Threadripper is 1TB, so if you want 1TB of RAM you will need to buy 4 sticks of 256 GB each? Thanks.
 

You don't buy these for gaming. You buy these for the pro features if you need them; otherwise you are wasting your money.

That said, it is nice to know that you don't need a whole other machine for adequate performance should you want to run an occasional game on the same machine you work on. And in that regard, these are more workstation than HEDT. HEDT is seemingly dead.
 

Sarcasm meters don't need ECC memory! :-P

Workstations were and are always about server grade components for doing work on the desktop. ECC memory, multiple sockets, SCSI and then SAS drives, professional display adapters. Too many desktop class machines get called workstations. Back in the day when Sun and SGI built actual workstations that cost over $100k, the nomenclature meant something. ;-)

HEDT can be done with high-end Intel and AMD desktop parts. If you need more, then that's where TRx comes into play. These parts are expensive; however, getting comparable performance from Intel is even more expensive. I've built custom systems for people who wanted more of the same, or something completely different, in less than a year. Their return on investment happens in weeks on some projects. Cost is of secondary concern. Time is super important.
 
Threadripper 3000 worked that way, but as we can see from the performance tests, the recently released Threadrippers, both Pro and non-Pro, kind of suck at client workloads. As does Intel's Sapphire Rapids WS.

That means that right now there is no HEDT platform at any price point. You either go consumer to get max performance in client workloads but lose out on PCIe lanes and memory channels, OR you go all workstation and give up client workload performance to get the PCIe lanes and memory channels. This is what HEDT used to do. It was the best of both worlds. Now that doesn't exist anymore.

Part of this is due to the surprisingly large drag of registered RAM latency, which really hurts performance, and since with DDR5 registered and unregistered RAM are physically different, they can't really offer both on the same board anymore (at least not easily), so you can't put regular RAM in a workstation board like you used to be able to.

While I would LOVE to have eight x16 slots, I'm really not asking for much. Just give me a consumer board that allows me to use a x16/x8 configuration without downgrading the main slot to x8, and I'll buy it and shut up. The x8 doesn't even need to be current gen. I'm OK with it being Gen 3.
 
Yeah, I agree there are no boards like that. X58 had a plethora of them, even though they used switches and ran hot. But you could load up with storage adapters, for starters.
The EVGA Classified X58 and Asus P6T7 "Supercomputer" workstation boards were examples. The former, though, was strange, as it was a larger (longer) board outside of ATX spec and not many cases could physically accommodate it. Lian Li, Windy, Caselabs, Mountain Mods, et al. needed. Those were the days!

Fortunately the majority of the workloads we built for were not particularly sensitive to memory performance. It was far more advantageous to have as many cores and as much RAM as possible.

Those systems running Windows 7 and even XP x64 had MUCH lower DPC latency than systems today. Sub-20 µs latency vs. the triple-digit µs that is commonplace now.
 
Use a big case that can host two systems, put one for gaming and regular workloads, and one as a workstation :ROFLMAO:

I can't believe I'm saying this.
 

With these platforms more than 2 sticks are no longer slower like they are on desktop.
 
I am quite mixed on these.

As another multi-generation TR owner still using a 3960x as a primary system, I’ve been extremely delighted with the longevity of this platform. I view these as “do everything pretty well, for many years” platforms that trace their lineage all the way back to chips and platforms like the i7-920. My 3960x has been fast, flexible and dependable; for my work (simulation, video processing) it’s still better than almost anything with 16 cores, and while we frequently see 1080p gaming benchmarks showing these platforms coming up short, with something like a 4090 and maxed out settings at 4K, I appear to be only giving up a few percentage points to the latest and greatest gaming CPUs while having gobs of I/O and slots for networking, storage, and various I/O cards. Given all of this, I ought to be a slam dunk customer for a 7960x or 7970x.

But, there's a problem. A couple of years ago I got a laptop with an M1 Max CPU, and while being a laptop the I/O isn't exactly plentiful, as a CPU it was within spitting distance of the 3960, while running on battery power! I've just ordered an M3 Max MBP that by benchmarks appears to be a dramatic uplift over the M1 Max, and unlike Threadripper I get ANE cores with good software support, and gobs of GPU that I can tap into for sim as well. It appears to be every bit as capable as the new Threadrippers, if not more so, while being a hell of a lot more efficient, portable, etc. The issue here simply becomes a matter of software. I like having one machine that I can work on, but then fire up Starfield or Cyberpunk. I can't do that on a Mac; I need Windows. The Mac is the better tool for my work. x86 is still the better tool for gaming. If I decide to use the laptop as my primary machine for work, that will be fine, but it means I will need a PC as well for gaming. And in that case, HEDT is completely unnecessary; I should just get a 7800X3D and call it a day.

So, I’m thinking hard about whether this platform will be in my future or if this is where I move away from an unbroken line of HEDT machines going all the way back to that i7-920.
 

I made the step to use a dedicated gaming machine long ago. Just security concerns warrant it. There is a lot of garbage code in gaming. On the dedicated machine I don't care.
 
dedicated gaming machine long ago.
3D V-Cache tech vs. high core count makes this quite interesting on the hardware side of things, especially if the work computer does not need heavy GPU power. With how much a nice TR setup costs, the price of a cheap mATX 7800X3D setup is probably worth it versus losing anything on the TR machine to make it better at gaming.

And a modern TV can make a gaming computer that's easy to move (if not set up in a different place entirely) an interesting option as well.

That said, especially in AAA gaming, I suspect there is less garbage code than in non-gaming code; there's a reason Call of Duty can do so much in 1/120 of a second versus your average application. There's a lot of money involved, with the best people obsessing over keeping things non-garbagy.
 
With these platforms more than 2 sticks are no longer slower like they are on desktop.
Oh I see, so it is not an issue with threadrippers. That's good. Any other downside for using more than 2 sticks in a threadripper system? Thanks.
 
I'm with Zara on this generally - when I bought my HEDT it was the no-compromises (except a considerable though reasonable cost increase) "enthusiast/overclocker mixed use" platform. I spent years with X58, and even today my old X99 system may not top benchmarks, but it has been exceptionally capable for gaming and other generalized usage. "Old school HEDT" generally came out relatively shortly after, or even BEFORE, the mainstream chipset, and had equal or greater core counts AND similar or better frequencies / cache / overclock capabilities, plus chipsets with features and options well beyond the mainstream: tri- or quad-channel RAM, additional PCIe lanes, and often (at least on higher-end boards) E-ATX layouts full of massive expansion potential, both in terms of physical ports and features to make use of them. Onboard components were typically higher end than what one would expect to find on a premium mainstream-chipset board, and certain standards (e.g. 10Gb Ethernet) would come to these platforms first, figuring that it would be the HEDT owners who made use of them once they left the aftermarket or server/workstation/commercial-only fields. Overclocking and other usage was unlocked and performance focused, with comprehensive BIOS/UEFI options, subject to variations depending on the tier/type of mobo you may have chosen.

The idea was of a powerful, long-supported, expandable, and widely capable platform made for a variety of potentially evolving workloads. It could game at the level of the best mainstream boards or better, while also running creative, server, or other types of tasks with high performance; perhaps not as high, and without some of the features, of the massively expensive Xeon or EPYC, but considerable nonetheless. Price increases were significant but often reasonable given the platform changes; often a "top" CPU was $1000 or so, with others being comparable to some of the higher-end mainstream platform CPUs. The mobos had a surcharge for the platform and were a little more expensive than an equivalent tier for the mainstream, but it wasn't massive. All of this started to change in recent years, when HEDT either started going away entirely or what was left became "Workstation/Server, Junior Grade" instead. Processors got more and more expensive, focusing primarily on "moar cores", and chipsets in kind grew in price significantly. Single-threaded performance and overclockability often weren't up to par with the mainstream platform in actuality, and of course the value and price were way out of whack vs. what they used to be. Availability was more limited as well, and launch times tended to come later in the equivalent process's life, adding more questions. For those with primarily high-core workloads that could justify the increased cost of the platform, it was still a good option and a step between the mainstream and the massive expense of EPYC/Xeon, but this was a markedly smaller use case than what HEDT offered previously.

I'm a bit disappointed in some of the benchmarks for single- or few-core performance, including the gaming ones; it seems the TR 7000 series is in some cases not even the equal of the Ryzen 7800X3D or 7950X3D, which seems unusual to me. In theory these chips, even if they were only using a few of their cores and at least able to turbo up to the stated 5.3 GHz, all while having a comparable or even superior amount of cache accessible symmetrically, plus greater RAM bandwidth with quad channel, should at least equal the mainstream if not surpass it, right? I grant there may be a certain amount of "too many cores" or other scheduler issues, but it seems that should be less of a problem, and in the past those tended to cause games to crash, not simply underperform on framerate. Maybe these issues will resolve in time with the platform rolling out, firmware and software updates, etc., but that's a hard thing to purchase on an "if". Now clearly, at least from the Guru3D reviews, even if they're not chart-topping, the gaming performance / single- or few-core performance / frequency isn't horrible, but especially at those prices "not horrible" is hard to justify, especially when it's not quite clear why they're not more performant. Still, it is a step forward compared to the near-entirely high-core-workload focus of the previous generation, but the similarly higher pricing and later debut make it a bit harder to choose and, at least at the moment, not the full return to HEDT form that some, myself included, were hoping for.
 
AVX-512/AVX2 workloads are where the 7000 series seems to shine, since the 3000 series didn't support AVX-512.

[Attachment: benchmark chart]
The 7970X is the sweet spot there for sure!

I remember picking up a 3990X and Zenith Alpha combo at Microcenter and they gave $300 off for the combo, 10X more than the normal CPU/mobo combo discount ($30). Now it's just $20.
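On the AVX-512 point above: if you're wondering whether a given box can even take an AVX-512 code path, one quick sanity check (Linux-only, and just a sketch, not tied to any particular benchmark) is to look at the CPU flags:

```python
# Linux-only sketch: check /proc/cpuinfo for AVX-512 and AVX2 support.
# Zen 4 Threadrippers (7000 series) should list avx512f; Zen 2 (3000 series) won't.
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX2   :", "yes" if "avx2" in flags else "no")
print("AVX-512:", "yes" if "avx512f" in flags else "no")
```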
 
Very well said. You expressed my feelings in your writing.
 
Yeah, the 7970X is what I'm planning to get if I decide to pull the trigger. What sort of performance percentage would you lose for gaming if you pair this Threadripper with the top video card, compared to mainstream CPUs? I mean, games are highly dependent on video cards.
 
I get it, they don't straddle the line like they used to.
Threadripper of old was the perfect feature set for using it for whatever you wanted.
The current iteration has very much picked a lane.
Yup. The "do anything and everything all at once" systems. Make them into anything. Use it for anything. Jack of all trades, master of none, but far far better than a master of one.
I think the whole idea of having an HEDT workstation is so you can multitask, doing work while you play. For example, I'll have virtual machines running in the background on a Fusion-io card while I run off a batch of photos in Lightroom, or encode 4K video between two NVMe drives while macro-crafting in FF14, with a video playing in Chrome with lots of tabs open, while playing another game. I actually max out this 3960X at 100% often, so it was totally worth it. What annoyed me was I couldn't drop in a 5970X 32-core upgrade when I thought I could later. I may get whatever Threadripper is newest in 2025 when Windows 10 goes EOL. I am building a new server this year.

Onboard stuff is a mixed blessing. Sometimes you actually want onboard stuff if you're going to shove two 3090 cards into a build for AI, but other times you want the freedom to pick what controllers you use, like running multiple Fusion-io / M.2 SSD cards / HBAs for spinning rust.
Yup. I don't mind onboard like Zara does - but I want options as I switch OSes around a bit and a lot of onboard ones suck.
I know I'm never concerned about what I'm doing while I'm gaming. I have 16c/32t. Never feel any issues. Have several VMs, etc. Might be streaming TNF, or something on Plex, or something on youtube, etc.
It's the RAM and the slots for most of us. Baby Threadripper (3950/5950/7950) have the cores - but getting them to 128G+ of RAM is hard (or impossible) at full speed, and I have Optane drives in slots that would slow down my GPU (or I'd run out of slots, or both). Plus extra 10G cards, etc.
I am quite mixed on these.

As another multi-generation TR owner still using a 3960x as a primary system, I’ve been extremely delighted with the longevity of this platform. I view these as “do everything pretty well, for many years” platforms that trace their lineage all the way back to chips and platforms like the i7-920. My 3960x has been fast, flexible and dependable; for my work (simulation, video processing) it’s still better than almost anything with 16 cores, and while we frequently see 1080p gaming benchmarks showing these platforms coming up short, with something like a 4090 and maxed out settings at 4K, I appear to be only giving up a few percentage points to the latest and greatest gaming CPUs while having gobs of I/O and slots for networking, storage, and various I/O cards. Given all of this, I ought to be a slam dunk customer for a 7960x or 7970x
Seconded. And exactly agreed.
But, there's a problem. A couple of years ago I got a laptop with an M1 Max CPU, and while being a laptop the I/O isn't exactly plentiful, as a CPU it was within spitting distance of the 3960, while running on battery power! I've just ordered an M3 Max MBP that by benchmarks appears to be a dramatic uplift over the M1 Max, and unlike Threadripper I get ANE cores with good software support, and gobs of GPU that I can tap into for sim as well. It appears to be every bit as capable as the new Threadrippers, if not more so, while being a hell of a lot more efficient, portable, etc. The issue here simply becomes a matter of software. I like having one machine that I can work on, but then fire up Starfield or Cyberpunk. I can't do that on a Mac; I need Windows. The Mac is the better tool for my work. x86 is still the better tool for gaming. If I decide to use the laptop as my primary machine for work, that will be fine, but it means I will need a PC as well for gaming. And in that case, HEDT is completely unnecessary; I should just get a 7800X3D and call it a day.
Yup... :( I have a dedicated shared "gaming box" for the family / friends, but ... yup.
So, I’m thinking hard about whether this platform will be in my future or if this is where I move away from an unbroken line of HEDT machines going all the way back to that i7-920.
Yuuuupppp.
Oh I see, so it is not an issue with threadrippers. That's good. Any other downside for using more than 2 sticks in a threadripper system? Thanks.
No. It's designed for 4 or 8 sticks (quad or octa channel) at full speed. And there are fast RDIMMs.

I'm hoping what we're seeing on some of the single-threaded benchmarks is just weakness from early BIOS/RDIMM support. I really do. I'll be waiting a bit for boards to mature a touch before I move anyway, but I still kinda want one.
 
Great. And hopefully things will mature in a positive way, especially price-wise ^_^
 
Oh I see, so it is not an issue with threadrippers. That's good. Any other downside for using more than 2 sticks in a threadripper system? Thanks.

No. You can really drop that concept from your head. What is happening on desktop with more than 2 sticks is that you are running more than 1 DIMM per channel on unbuffered RAM. TR has 4 channels (or 8 for Pro), so 4 sticks are still 1 DIMM per channel, so no problemo.
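To tie this back to the earlier 1TB question, here's a tiny sketch of the slot math using the platform figures from this thread (4 channels non-Pro, 8 for Pro, 1 DIMM per channel). The per-stick capacities are just the arithmetic, not a claim about which modules are actually on sale:

```python
# DIMM-slot math for the non-Pro / Pro platforms as described above (1 DIMM per channel).
def sticks_and_size(channels: int, dimms_per_channel: int, target_tb: int):
    sticks = channels * dimms_per_channel
    per_stick_gb = target_tb * 1024 // sticks
    return sticks, per_stick_gb

for name, ch in (("non-Pro (quad channel)", 4), ("Pro (octa channel)", 8)):
    sticks, per_stick = sticks_and_size(ch, 1, target_tb=1)
    print(f"{name}: {sticks} sticks, {per_stick} GB each for 1 TB")
# non-Pro: 4 sticks, 256 GB each; Pro: 8 sticks, 128 GB each
```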
 
Now if only the Desktop PC market would migrate away from 2 Channel to 4 Channel.
ITX could easily migrate to SODIMM.
The rest of the boards already have 4 slots, so no big board design changes there.
With the way core counts have exploded it’s needed. Gone from 2 to 32+ threads in a consumer PC but still only using 2 channels…
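Putting rough numbers on that: per-channel DDR5 bandwidth is transfer rate x 8 bytes, so the channel count matters a lot once you divide by core count. The DIMM speeds below are just illustrative picks, not platform specs:

```python
# Rough bandwidth-per-core math: channel bandwidth = MT/s * 8 bytes (64-bit channel).
def gb_per_s(mt_s: int, channels: int) -> float:
    return mt_s * 8 * channels / 1000  # GB/s

configs = [
    ("Desktop, 2ch DDR5-6000, 16 cores", gb_per_s(6000, 2), 16),
    ("Desktop, 2ch DDR5-6000, 32 cores", gb_per_s(6000, 2), 32),
    ("TR, 4ch DDR5-5200, 32 cores", gb_per_s(5200, 4), 32),
]
for name, bw, cores in configs:
    print(f"{name}: {bw:.0f} GB/s total, {bw / cores:.1f} GB/s per core")
# ~96 GB/s vs ~166 GB/s total; ~3 GB/s vs ~5.2 GB/s per core at 32 cores
```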
 
It's the RAM and the slots for most of us. Baby Threadripper (3950/5950/7950) have the cores - but getting them to 128G+ of RAM is hard (or impossible) at full speed, and I have Optane drives in slots that would slow down my GPU (or I'd run out of slots, or both). Plus extra 10G cards, etc.

I have 160G
 
And I have 3T on some of my boxes - but building a workstation with that kinda sucks, since most of the RDIMMs in that size are slow as crap. You talking about a consumer system? At full speed?
 

The discussion was about smoothness with the ability to do many things at once. Then somebody assumed something about memory limitations, by assuming what kind of CPU I had... etc.
 
Great post. My thoughts exactly.
 
I made the step to use a dedicated gaming machine long ago. Just security concerns warrant it. There is a lot of garbage code in gaming. On the dedicated machine I don't care.
I’ve long favored space (one machine, one display, one desk, no shuffle) as my priority but that appears to be something I am increasingly likely to give up.
 

I use the same main monitor. Just switching input. Just shuffling keyboards.
 