Need an x16 + x8 configuration at the same time. Are there any consumer boards?

Yeah, they really don't make interpreting PCIe lane configurations easy.
Which is why it's so helpful that manufacturers now include block diagrams. At least since the release of AM5, ASRock and Gigabyte have been great about it. MSI and ASUS, not so much.
 
Sorry, I have no idea how to make this not embed.
criccio

I would like to access that spreadsheet directly from Google. I will probably upgrade to AMD Zen 5 maybe 6 months after the announcements, when there are enough motherboard choices. In the past I reflexively used ASUS because I have had good results with ASUS boards. But I know market conditions change and I should be open to non-ASUS alternatives. That said, doing all the research myself would drive me batshit. The author of this spreadsheet seems to have done all that research.

One point. Buying Intel is like going over to the dark side and signing a long-term lease for an apartment there.
 
DMed. (or.. "started a conversation", I couldn't find a normal direct message option)
 
https://docs.google.com/spreadsheets/d/1NQHkDEcgDPm34Mns3C93K6SJoBnua-x9O-y_6hv8sPs/edit#gid=0
 
Hopefully soon we'll see PCIe 4.0 x4 NICs come out, since that's the same bandwidth as PCIe 3.0 x8, which would obviate your need for the mythical x16/x8. Or... simply eat a 2-3% performance loss on your GPU and run x8/x8, which is a perfectly acceptable trade-off to get very high speed networking on a consumer platform.
PCIe 4.0 NICs have been out since at least 2020.


How much bandwidth can you sustain right now? 10Gb isn't enough but 40Gb is out of reach. That's a massive spread.

Try bifurcating your NIC's slot to x4/x4 in the BIOS and see how things behave. If performance is acceptable under load, great. You can use the card in an x4 electrical slot until the PCIe bandwidth bottleneck actually becomes an issue. After all, your NIC is already bottlenecked by its Gen3 x8 interface if you try to saturate both ports.

There are plenty of decent, inexpensive AM5 boards with a gen4 x4 CPU slot. 32Gb starts becoming a problem down the line? Snipe a gen4 NIC (e.g.: MCX516A-CDAT) when the price is right.
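As a sanity check on the numbers in this thread, here's the bandwidth math in a short sketch. The per-lane GB/s figures are the commonly quoted rule-of-thumb approximations (after encoding overhead), not official vendor specs:

```python
# Approximate usable GB/s per PCIe lane, by generation (rule-of-thumb
# figures after encoding overhead; these are assumptions, not vendor specs).
PER_LANE_GBS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_GBS[gen] * lanes

# A dual-port 40GbE NIC can push up to 2 * 40 Gb/s = 10 GB/s.
nic_demand_gbs = 2 * 40 / 8

print(round(link_bandwidth(3, 8), 2))  # Gen3 x8: below the NIC's 10 GB/s
print(round(link_bandwidth(4, 4), 2))  # Gen4 x4: same ceiling as Gen3 x8
print(round(link_bandwidth(4, 8), 2))  # Gen4 x8: comfortable headroom
```

This is why a Gen4 card like the MCX516A-CDAT in an x4 electrical slot lands at the same ceiling as the current Gen3 x8 setup, while a Gen4 x8 link finally gives headroom for both ports.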

Edit: Scrubbed gen5 NIC claims, hit post by accident.
 
My MSI X470 Gaming Plus is CrossFire capable and will support the 5800X3D

  • 2 x PCIe 3.0 x16 slots (PCIE_1, PCIE_4)
    • 1st, 2nd and 3rd Gen AMD® Ryzen™ processors support x16/x0, x8/x8 mode
    • Ryzen™ with Radeon™ Vega Graphics and 2nd Gen AMD® Ryzen™ with Radeon™ Graphics processors support x8/x0 mode
    • Athlon™ with Radeon™ Vega Graphics processor supports x4/x0 mode
  • 1 x PCIe 2.0 x16 slot (PCIE_6, supports x4 mode)1
  • 3 x PCIe 2.0 x1 slots
  1. PCI_E6 slot will be unavailable when installing M.2 PCIe SSD in M2_2 slot.
 
That is the same problem as every other board.

You put the GPU in slot #1 and it runs at x16. Then you skip slot #2 and put the card in slot #3, which is x16 long, so the card fits, but it only has the pins for x4, so it runs at x4... not the x8 that is being requested.


This dumb PCIe problem has really been vexing me too. I have a similar issue, but at least mine is manageable since I'm only dealing with 10GbE, not 40.
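A quick way to think about the slot behavior described above: a PCIe link trains to the widest width both ends actually wire up, so the slot's physical length is irrelevant. A minimal sketch (the function name is mine):

```python
def negotiated_width(card_lanes: int, slot_electrical_lanes: int) -> int:
    """PCIe link training settles on the widest width supported by BOTH
    ends. A slot's physical length only determines whether the card fits;
    the electrically wired lanes set the ceiling."""
    return min(card_lanes, slot_electrical_lanes)

# An x8 NIC in an x16-length slot that is only wired for x4:
print(negotiated_width(8, 4))    # runs at x4
# The same NIC in a true x8 (or wider) electrical slot:
print(negotiated_width(8, 16))   # runs at x8
```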
 
Maybe the guys who designed the X670E chipset didn't fully understand the use cases.
 
The only board I ever owned that would do what you're asking is the EVGA X58 3-Way SLI 758e. I lucked out buying late and got the revision that made the Xeon X5660 a drop-in without the mod, the 1.2V board.

Slots 1 and 2 were full x16, slot 3 was x8. I could CrossFire with 1 and 3 for better airflow; here's what that looked like: http://www.3dmark.com/fs/19756996

Mystery Machine award, they'd never seen that setup before.

 


Yeah, x58 was technically HEDT though, so more PCIe lanes than regular consumer.

I did something similar with X79 and later TRX40, both of which were HEDT.

Unfortunately HEDT seems to be dead. Current Threadrippers and Xeons are Workstation platforms. HEDT no longer seems to exist.

In lieu of HEDT I'm trying to figure out what I can do with consumer parts, because workstation parts are not going to work for me.

It should be possible, judging by how many higher end boards have dual dedicated electrical x4 ports (in physical x16 slots), but at least thus far, no board I have found offers that configuration. Maybe this is a limitation of the chipset: offering two groups of x4, but not the ability to combine them into a single x8.

The best of both worlds would be if these 8 lanes were offered across two slots in an 8x/0x or 4x/4x configuration, like they do with the primary x16 slot and secondary slot, but maybe that is asking too much.

It would add a little cost to design it that way, but once you do that one board would work for everyone, rather than needing multiple versions. The question is which choice would have a lower overall cost.
 
Just need to find a whizbang adapter on AliExpress that takes two m.2 slots and aggregates them into a single x8.
 
Looks like you're stuck with running the graphics card at x8. Actually, at PCIe 4 or 5 speeds, that's not bad, even at 4k resolution.
 
I have an idea...granted, I don't know much about QSFP, so not sure how good of an idea this is...

It looks like you can get either an Intel 810 series NIC or a Mellanox ConnectX-5 Ex series NIC. Now, they say they are 100Gbps cards, so they run at PCIe 4.0 x16. You're not after that much bandwidth, so you slap one into one of the electrically x4 slots. That may get you the bandwidth you're after. Maybe?
 
Reading this thread, I realized that I should be running an HEDT, even if I wasn't aware of that until now. Just for grins I searched on "ASUS HEDT". Nothing except ROG and gaming machines.
 
I guess I'm just going to have to wait then.

Until I can find a motherboard that both boasts top tier consumer CPU performance in low threaded workloads AND allows me one full x16 (Gen4+) and one full x8 (Gen3+) electrically, at the same time, I'm just not going to upgrade.

I was ready to jump on the new Threadrippers, but unlike the 3xxx series, the performance is just terrible in non-workstation loads when compared to their consumer counterparts.

Fingers crossed Zen5 and its associated new chipset brings more options along this line. They could start by bumping up the chipset links to actually support Gen5 instead of Gen4, which should provide enough bandwidth to offer some more flexibility to Motherboard designers.
Same. Driving me nuts. The workstation performance is great, but lord is the non-workstation... weak. Not terrible, mind you, but weak weak weak.
Same struggle I'm having. I'm getting close to "gonna have to live with x4s on the third and fourth slot."
Let me know if you find one
Zenith II Extreme Alpha was the last real one. The others are all WS boards now (Sage line). They're damned fine boards, but VERY workstation focused.
 
The one piece of technology AMD could be proud of: no SLI or CrossFire ribbon was needed, and it worked even on the RX 570/580.

If the CX driver still exist, what would 5800x 3D look like?
 
PCIe switches got real expensive.
They also don't support bifurcation, which defeats the purpose in a lot of cases. The last "consumer" board with one was an X299 board I had, which was neat, but not being able to use the 4x m.2 cards was kind of a bummer. (Supermicro also had a Z490 board with 8888 capability, but I didn't own one.)
 
Could you look into doing Thunderbolt 4 into QSFP? I know there’s stuff out there geared toward the Mac market but not sure what the bandwidth really looks like in practice on it.
 
They knew exactly what they were doing- separating the market between consumer and workstation. This started with LGA2011 when dual-CPU models couldn't be overclocked unlike the LGA1366 dual-CPU brethren.
X79 could do it. 990FX could do it. When multi-GPU setups fell out of favor and PCI-E controllers moved onto the CPU, Intel and AMD saw no reason to give consumers x16 + x8 configurations. It became a way to differentiate the consumer and workstation markets.
 
I would be OK with buying a workstation board if there were entry-level models priced like medium priced consumer boards. Or maybe there needs to be a new category of boards, between today's consumer and workstation markets. I for one don't need a dual CPU setup. (Back in the day I had an ASUS A7M266-D with dual CPUs. I used the "pencil trick" to get the CPUs to run in MP mode.)
 
Eh, that's what X58 and X79 were (AMD didn't really compete back then). Intel and AMD either decided that the market was too small or they could make more money forcing enthusiasts into true workstation platforms if they wanted the additional capability. The prosumer motherboards that existed with X58 and X79 are gone now and they don't show any sign of coming back soon, which is a shame. Maybe if multi-GPU gaming were able to make a comeback but we pretty much know that's not going to happen. The only niche scenario I could see for multi-GPU gaming is with 3D where a GPU is tasked to each eye.
 

Yeah, I've suspected this is on purpose too.

You want to use a second 8x PCIe card, spend thousands on a workstation product, fool.

I wouldn't be opposed to spending on a workstation product.

My only concern is that current workstation products - while great at workstation stuff - underperform in most typical client workloads and games, probably due to no longer being able to offer both unregistered and registered RAM on the same motherboard with DDR5, like they could with DDR4 and earlier.

The side effect of this has been to kill HEDT once and for all. In the past they could make a high end CPU product, and since registered and unregistered RAM before DDR5 were pin compatible, the end user could decide if they wanted it to be more of a workstation (go with registered ECC) or more of a HEDT system (go with overclocked, screaming fast unregistered non-ECC).
Now, if you are AMD or Intel designing a workstation-like product and have to choose to make it compatible with either registered or unregistered RAM, you are probably going to choose registered every time, and when you do, the higher RAM latency resulting from both the register buffer and the ECC is going to absolutely kill performance for client/gaming stuff.

It's a real shame.


Anyway, I still hope I am wrong about the forced segmentation, and that AMD's new chipsets for Zen 5 finally move to Gen5 PCIe for the four lanes that go to the chipset, which would give motherboard makers more flexibility for secondary PCIe slots hanging off that chipset.

I hope that at least one of them will offer a single x8 slot (Gen3 or higher) instead of more plentiful x4 slots.

🤞
 

It is. I have to wonder how they do their market research. I know that some guys are buying the least expensive motherboard they can, but there are also guys who would (and are fortunate enough to) spend some of their hard-earned dollars, euros and pounds to get a board with extra lanes and PCIe slots.

I have to ask the guys who read this post. Do you think there is any way that we can make our voices heard? Anyone have the right contacts at AMD or Intel?

Me, I've always bought ASUS since I was in kindergarten. :p But if there was an HEDT chipset and ASUS didn't bring out any products, but say ASRock or MSI did, then I would buy the other manufacturer's board. Heck, if only Intel but not AMD had an HEDT chipset, I would go over to the dark side.
Agreed. How about two x8 slots and two x16 slots?
 

I wonder if they could do that with current sockets, or if they'd need a total redesign with more pins for more lanes.

Off the top of my head, the way AM5 currently works is that you have a grand total of 24 lanes.

(I'm using AMD as an example here because I am less familiar with Intel's configuration. The most recent I played with was my Rocket Lake Xeon, and how Intel handles chipset bandwidth on that is both confusing and kind of bad, as performance on the m.2 slots off the chipset was awful.)

Four of those 24 lanes are consumed by the chipset.

Then you have 16 for a GPU and 4 for your primary m.2 slot.

Anything beyond that has to come off of the chipsets bandwidth from those first four lanes.

So the chipset typically has some sort of PCIe switch in it, allowing it to pool the bandwidth from those 4 lanes and dole it out to lots of things (which, if they were discrete expansion cards, would each need at least an x1 link and be less flexible): USB, SATA, sound, etc. Then they can also assign whatever bandwidth is left to extra m.2 slots or PCIe slots.

Current chipsets operate at Gen4 despite the CPU being capable of Gen5 (which feels a little wasteful).

That means the chipset gets 4x 1.97GB/s = 7.88GB/s to play with.

So some current boards have two secondary x4 Gen3 slots (in addition to the main x16 and m.2 slots). If maxed out, those represent 8x 985MB/s = also 7.88GB/s.
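The arithmetic above in one small sketch (same approximate per-lane figures as quoted in the thread, not vendor specs):

```python
# Approximate usable GB/s per PCIe lane, by generation (rule-of-thumb
# figures, not vendor specs).
GEN_GBS = {3: 0.985, 4: 1.969, 5: 3.938}

chipset_uplink = 4 * GEN_GBS[4]         # x4 Gen4 link from chipset to CPU
two_gen3_x4_slots = 2 * 4 * GEN_GBS[3]  # both secondary slots maxed out

print(round(chipset_uplink, 2))     # ~7.88 GB/s
print(round(two_gen3_x4_slots, 2))  # ~7.88 GB/s: the two slots alone can
                                    # saturate the uplink, leaving nothing
                                    # for USB, SATA, extra m.2, etc.
```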

That means that if you use those two slots to their max, there is no bandwidth left for anything else the chipset does. I mean, I usually disable onboard sound, onboard Ethernet/WiFi and onboard SATA anyway, but at the very least I'd need bandwidth for USB.

...and that's before you consider that there are usually secondary m.2 slots off of the chipset that share that bandwidth as well. So these dual x4 slot boards are a seriously blocking design when it comes to the bandwidth from those x4 Gen4 chipset lanes.

They probably get away with this for the same reason your ISP gets away with not provisioning upstream capacity equal to the sum of every subscriber's bandwidth, and the same reason the uplink port on your network switch gets away with being smaller than the sum of all the other ports.

It is exceedingly rare for all parts of the system to be maxed out at the same time, meaning that while, yes, the system has a blocking configuration on chipset bandwidth, in practice you are probably barely ever going to notice. At least if done right.

That said, I'd argue that depending on how many secondary m.2 slots there are and are in use, this configuration is probably already pushing it.

Now, if they were to make the new Zen 5 chipsets Gen5 capable, since the CPUs already support Gen5, all of a sudden you have twice the bandwidth to play with: 4x 3.94GB/s = 15.76GB/s.

Depending on how many secondary m.2 slots you have/use, two x8 Gen3 slots would be getting to the same level of greedy as two x4 Gen3 slots on a Gen4 capable chipset.

That said, it does give you - the user - more flexibility in how to use the chipset bandwidth, and if one understands the limitations and doesn't have unrealistic expectations, that is great. (But companies usually avoid this type of "extra flexibility, including the flexibility to use it wrong and get poor results" product, because then idiots who don't read manuals or don't understand how things work use it wrong, make a big stink, hurt the reputation of the company, etc. These lowest common denominator users ruin it for all of us.)


While I would - of course - love something like X79/X99 where I had 40 latest-gen lanes, I'd be reasonably happy with a future Gen5 chipset board with a single secondary x8 Gen3 slot off the chipset for my more advanced networking desires.

I'd still disable on board sound, SATA and Ethernet/WiFi, but I would definitely be using USB and might use some secondary m.2 slots, so that extra chipset bandwidth would come in handy.

The irony here is that while extra PCIe lanes are becoming more and more difficult to get your hands on, the demand for them has only been going up with more and more demand for NVMe storage in the last few years.

It used to be that you put one fancy expensive NVMe drive in your system, and if you needed more storage, you used a couple of cheaper SATA drives. Now NVMe drives (especially Gen3 ones) are just as cheap as, and sometimes cheaper than, SATA drives, meaning lots of people would love more m.2 slots, and the powers that be insist on giving us only 24 lanes in total. It's a crying shame.

If it were up to me, the bare minimum on consumer systems would be 40 lanes. If you need more than that, then you can graduate to a Workstation product.

All of that said, AMD is married to AM5 for at least another two generations (if they follow the cadence of AM4), so the likelihood of us seeing a design with more PCIe lanes to the CPU any time soon is low.

So the best we can hope for - in the short to medium term - is an upgrade of the AMD chipset uplink to Gen5 lanes, with the chipset design allowing that extra bandwidth to be used more flexibly, so that at least one of the 100 motherboard designs from 5 brands has a secondary x8 slot that does not interfere with the primary x16 slot.

Heck, I don't even care if they save money/bandwidth elsewhere by omitting on board sound, SATA, Ethernet and WiFi, because if it comes with those things, I'm only going to disable them anyway.
 
AM4 was 24, AM5 is 28. There's an extra x4 for a second m.2 or a cpu direct slot. I don't think you can split it out to say 4 x1 slots though.
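Under that correction, the AM5 CPU lane budget tallies like this (the labels are my shorthand for the allocation described in this thread; exact usage varies per motherboard):

```python
# How the 28 CPU lanes on AM5 are typically allocated (labels are my
# shorthand; per-board usage of each group varies).
am5_cpu_lanes = {
    "x16 GPU slot": 16,
    "primary m.2": 4,
    "second m.2 or CPU-direct slot": 4,
    "chipset uplink": 4,
}

total = sum(am5_cpu_lanes.values())
print(total)  # 28
```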

I think I've seen a lot of newer designs where SATA shares SerDes pins, so you're picking a SATA port or a PCIe lane. I guess you could probably have lots of NVMe if you were OK with one lane per drive, which might actually be OK for a lot of uses.

Honestly, I just don't see a lot of x8 slots in your future. If I'm a creative OEM, I think I'm still going to give you a 5.0x16 slot from the CPU, and then I've got three 5.0x4 to play with. You're a niche user, so no chipset for you. The CPU has 4x USB 3.2 Gen2, one USB 2.0, and HD Audio; that covers basic input/output; USB hubs will probably fill in anything else you need, and if not, too bad.

So what to do with the three x4s? Gotta give you one 5.0x4 for expensive NVMe. I need some PCIe switch chips that are magically inexpensive... then I can take another 5.0x4 for NVMe and give you two 4.0x4 m.2 slots. For the last x4, I'd give you two 4.0x4 slots in mechanical x16 slots, because more people are going to be happy with two slots than with one x8 slot. Maybe skip the 5.0 m.2 slot, and do two 4.0x4 m.2 and four 4.0x4 mechanical x16 slots. Then you can plug in lots of cards.
 
Correct. My board, for example. I find block diagrams the best way to visualize this kind of thing.

A lot of boards just come down to how certain I/O decisions are made. Do I think I'm really going to make use of USB4? So far, I haven't. Though being able to run a display off the iGPU directly from one of the Type-C ports is pretty nice.

[block diagram attachment]
 

I wonder if the chipset even allows for combining the likes of M.2_3 and M.2_4 into one 8x slot (without the motherboard vendor adding an additional expensive PCIe switch), or if their 4x nature is hard coded.
 
I highly doubt it, you'd likely need switches and/or re-drivers to do something like that.

Frankly, I've wanted the ability to, say, instead of offering x8 lanes of Gen5 on a full-size x16 slot, have it still wired for x16 lanes so that if you simply run it in Gen4 mode, you get all x16 lanes. I just think the hardware to do something like this would price these boards out of reach and into true workstation class territory. Some of them already are, TBH.
 

To be honest, I'd pay for it. I want to go consumer (but with a secondary x8 that doesn't impact the primary slot) not because of the lower price, but because I want the combination of top tier low threaded performance (which just isn't available from modern workstation products) and the ability to install at least one enterprise class PCIe x8 expansion card at the same time.

The price isn't completely unimportant to me, even I have my limitations, but I'd be willing to pay a significant premium for this capability, because the alternative is building two machines. One workstation for my day to day use, and one consumer for everything else, and that is expensive too, and would prevent me from using the 8x slot for consumer/gaming stuff.

I can't help but think that there are others like me out there as well. The traditional [H] types who used to buy Extreme Edition Intel CPUs and X58, X79, X99 and X299 platforms to have the true HEDT "best of both worlds, just without ECC" experience.

Are we the majority of the market? Hell no.

But in a crowded field of a hundred different motherboards for each generation of CPU from AMD and Intel, there ought to be room for at least one such niche product. And yes, it would come at a premium, but I'm OK with that. Take a current prosumer level board (not a ROG Christmas/disco light ultra gamerlicious monstrosity) and add this feature, and that is easily worth a $500 premium. Wouldn't even think twice, provided there aren't any other serious drawbacks to the board.

I've even looked into PCIe switch add-on/riser boards that could take all the bandwidth of a secondary Gen4 m.2 slot and turn it into a single Gen3 x8 slot. They have the same bandwidth. A PCIe switch would introduce some microseconds of latency, but it wouldn't be a big deal. Sadly, I haven't found anyone who makes anything like that.

The problem seems to be that ever since Broadcom bought PLX technologies in an attempt to corner the PCIe switching market, the relative lack of competition has driven the pricing of modern PCIe switches (and you'd need the modern ones or they wouldn't support Gen 4 or Gen5) through the absolute roof.

PCIe switches used to be relatively cheap and common in the Gen3 era. Heck, I bought a Gen3 "x8 PCIe slot to two x4 U.2 ports" adapter that uses a PCIe switch (to avoid needing motherboard bifurcation support) and that shit cost like $35 brand new. These days even the lowest end PCIe switches with the fewest lanes sell for $400-$700 for just the chip, before it winds up on a board. At least for buyers who can't enter into volume purchasing agreements.
 
Threadripper 7945WX nearly matches a Ryzen 7900:

https://www.amd.com/en/products/processors/desktops/ryzen.html#tabs-0eb49394b2-item-446166865a-tab
Ryzen™ 9 7900: 12 cores / 24 threads, up to 5.4 GHz boost, 3.7 GHz base, AMD Radeon™ Graphics
https://www.amd.com/en/products/processors/workstations/ryzen-threadripper.html#shop
AMD Ryzen™ Threadripper™ PRO 7945WX: 12 cores / 24 threads, up to 5.3 GHz boost, 4.7 GHz base, discrete graphics card required, no thermal solution included, 350W default TDP

The boost clock is only 100 MHz lower, and the base clock is actually 1 GHz higher. With a good cooler, you could probably make up the 100 MHz boost difference.

Edit: Kinda sucks they don't seem to sell 'em retail.
 
Threadripper 7945WX nearly matches a Ryzen 7900:

https://www.amd.com/en/products/processors/desktops/ryzen.html#tabs-0eb49394b2-item-446166865a-tab

https://www.amd.com/en/products/processors/workstations/ryzen-threadripper.html#shop


The boost clock is only 100 MHz lower, and the base clock is actually 1 GHz higher. With a good cooler, you could probably make up the 100 MHz boost difference.

Edit: Kinda sucks they don't seem to sell 'em retail.

It's not just the clocks.

All of the new Threadrippers take only Registered ECC RAM. (They used to take either, but with DDR5, registered and unregistered DIMMs are no longer pin compatible.)

Also the chiplet layouts on most of the Threadrippers have negative impacts on lightly threaded performance as well.

I had always considered these to have rather limited impact, but in reviewing the launch reviews for Threadripper 7xxx, while they absolutely ran away with highly threaded true workstation loads, I was surprised at just how poorly they actually perform in lightly threaded/consumer/gamer loads.

Essentially, don't buy one of these if you play games.

Though most reviewers focused on scientific, AI, rendering and other workstation loads (probably at the behest of AMD), the ones that did test games showed that there is a huge performance penalty in that type of workload:
[attached: gaming benchmark charts]


The beastly Threadripper 7980X is outperformed by a Ryzen 5 7600 and a generation-older Ryzen 7 5800X3D.

Meanwhile, at launch, my Threadripper 3960x was mostly on par with the similarly clocked (but way fewer threads) consumer Ryzen 7 3800x.

There were even some cases where the 3960x paid off and performed better in certain titles than its consumer counterpart. One such title was the extremely RAM speed sensitive Starfield, where the massive amount of L3 cache that Threadripper got (and possibly also its quad channel RAM) made it perform akin to a never released Ryzen 7 3800x3D, thus making it playable (judging by 60fps minimum at all times) in that title where the 3800x was not.

Now the roles are reversed. The new Threadrippers are fantastic for workstation stuff, but really bad for consumer stuff, making my brand of "no sacrifices multipurpose HEDT machine that does everything except ECC (but could do ECC if I decided to swap out the RAM and eat the performance penalty)" no longer work at all with these new chips.

I mean, games are not my only priority in CPU shopping, but at the same time, if I am going to go out there and spend $1,799 (or more) on a CPU in a $1,200 (or more) motherboard, I'd rather not be outperformed by a $200 CPU in a $150 motherboard.

I want one halo machine that does it all with no compromises. And they have pretty much made sure that I cannot have that this generation, which bums me out, as that has been my entire build philosophy for the 20 years since I graduated college and had the budget to make it happen.

Prior to that I had more of a "bang for the buck" philosophy to match my lower budget, and instead I made up for it with overclocking. But back then we also didn't have to worry about stupid shit like not getting enough expansion.

This was my Abit KR7A-RAID I used in the second half of college:

[attached: photo of the Abit KR7A-RAID motherboard]


It was a $137 motherboard when new. Nothing really performed any better (though I bought cheaper, lower end CPUs and overclocked the shit out of them instead of the top end ones). Notably, it came with all the expansion anyone could have needed or wanted in 2002.

With most AGP GPUs being single slot back then, you could still fit 6 PCI cards in there if you wanted to, yet you still had USB and four IDE headers for up to 8 drives on board! Heck, it even had four fan headers!


Yeah, there may not have been sound or networking on board, but if you ask me that just made things more interesting, as you could customize and prioritize the components YOU wanted, not whatever some board manufacturer got a good deal on and decided to include on board. (I mean, there were only two USB ports, but that was really all anyone needed back then, as the only USB products were really mice and keyboards. But you also got not one, but two COM ports. Pretty cool, huh? :p )

I want a modern version of THAT. Subtle, understated, no mood lighting, no fancy decorative heatsinks that don't serve a practical cooling purpose. Heck, they don't even have to include the on board IDE/SATA storage interfaces. I won't use them anyway.

That is the pinnacle of PC motherboard design right there.
 
90+ 1% lows seems more than fine to me, but I'm used to <60fps on med/high settings. lol

The Dell WS boards have 2x 16-lane and 3x 8-lane PCIe slots, fwiw, and they're on sale right now (I configured a basic system for ~$1600 $1900; it wouldn't let me go without an SSD and I'm not paying $200 for a 500GB drive).
 
The Z790 chipset has 28 PCIe lanes (20 lanes PCIe 4.0, 8 lanes PCIe 3.0); isn't that enough?
 
The Z790 chipset has 28 PCIe lanes (20 lanes PCIe 4.0, 8 lanes PCIe 3.0); isn't that enough?
No. Well, if they were utilized differently, yes. But not the way they are, and not for what the OP wants to do.
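To expand on why: all of the Z790 chipset's lanes hang off a single DMI 4.0 x8 uplink to the CPU, so a fast NIC on chipset lanes contends with every NVMe, USB, and SATA device for that one link, while the GPU's x16 has to come from the CPU's own lanes. A rough sketch of the oversubscription (my own illustration; line-encoding overhead only, lane counts from Intel's published Z790 configuration):

```python
# Sketch of the Z790 bottleneck: every chipset lane funnels through one
# DMI 4.0 x8 uplink to the CPU (electrically equivalent to PCIe 4.0 x8).
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def gbs(gen: int, lanes: int) -> float:
    encoding = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_S[gen] * encoding / 8 * lanes

dmi_uplink = gbs(4, 8)                  # ~15.75 GB/s, shared by ALL chipset devices
chipset_lanes = gbs(4, 20) + gbs(3, 8)  # ~47.26 GB/s of downstream lanes
print(chipset_lanes / dmi_uplink)       # ~3x oversubscribed
```

So the 28 chipset lanes are about 3:1 oversubscribed against the uplink even before you hang anything demanding off them, which is why they can't stand in for real CPU lanes.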
 
90+ 1% lows seems more than fine to me, but I'm used to <60fps on med/high settings. lol

Yeah, but that's at the time of purchase.

When you buy something like this you are probably going to have it for 5 years. A motherboard and CPU will last through 2-3 GPU upgrades, so you need a little bit of headroom.

I don't want to invest thousands in something that is "enough" now, especially when consumer parts are so much faster. As always happens, when performance is available, next gen titles start taking advantage of that performance, making something that is adequate right now, more quickly inadequate in a few years.

Neither Baldur's Gate nor Cyberpunk is the harshest title on CPUs right now, though (on GPU load possibly, but not CPU). They just happened to be the benchmarks I could find.

It would be interesting to see how these Threadrippers perform in Starfield wandering through New Atlantis or Akila City, or in the upcoming title Dragon's Dogma 2, which reportedly has some insane CPU load due to a novel approach to NPC AI. I bet we aren't talking 90+ 1% minimums.
 
Yeah, but that's at the time of purchase.

When you buy something like this you are probably going to have it for 5 years. A motherboard and CPU will last through 2-3 GPU upgrades, so you need a little bit of headroom.

I don't want to invest thousands in something that is "enough" now, especially when consumer parts are so much faster. As always happens, when performance is available, next gen titles start taking advantage of that performance, making something that is adequate right now, more quickly inadequate in a few years.

Neither Baldur's Gate nor Cyberpunk is the harshest title on CPUs right now, though (on GPU load possibly, but not CPU). They just happened to be the benchmarks I could find.

It would be interesting to see how these Threadrippers perform in Starfield wandering through New Atlantis or Akila City, or in the upcoming title Dragon's Dogma 2, which reportedly has some insane CPU load due to a novel approach to NPC AI. I bet we aren't talking 90+ 1% minimums.
True. But to be fair, they didn't test the 7945wx. If it's a single CCD processor, or even if it uses two, it would likely be significantly faster than a 4 or 8 CCD threadripper in gaming workloads.
 
True. But to be fair, they didn't test the 7945wx. If it's a single CCD processor, or even if it uses two, it would likely be significantly faster than a 4 or 8 CCD threadripper in gaming workloads.

Maybe. We'd still have the Registered ECC performance hit, but it might actually be livable. I'd have to see the numbers.

I'm going to have to Google and see if I can find something on that. I know they don't market them to consumers, but sometimes you can find OEM parts anyway.

Of course if they are anything like their EPYC brethren they might be vendor locked :/
 
I can't help but think that there are others like me out there as well. The traditional [H] types who used to buy extreme edition Intel CPUs and X58, X79, X99 and X299 platforms to have the true HEDT "best of both worlds, just without ECC" experience.

Are we the majority of the market? Hell no.

Agreed, but we probably want to buy products with higher profit margins.
But in a crowded field of a hundred different motherboards for each generation of CPU fro AMD and Intel, there ought to be room for at least one such niche product.

You would think. I don't have any specific knowledge, but looking at the ASUS product lineup, it seems they must have some kind of software that allows them to tweak a basic design into multiple released products with different feature sets and price points, and a manufacturing process to match.
And yes, it would come at a premium, but I'm OK with that. Take a current pro-sumer level (but not ROG christmas/diso light ultra gamerlicious monstrosity)

Yeah. I buy ROG boards for the build quality and features, but all the RGB lighting stuff is wasted on me. My case does not have a clear side panel.
and add this feature, and that is easily worth a $500 premium.

Over what? A premium, sure. Maybe $100-$250.
Wouldn't even think twice, provided there aren't any other serious drawbacks to the board.
^^
The problem seems to be that ever since Broadcom bought PLX technologies in an attempt to corner the PCIe switching market, the relative lack of competition has driven the pricing of modern PCIe switches (and you'd need the modern ones or they wouldn't support Gen 4 or Gen5) through the absolute roof.

OK, we need a competitor here.
These days even the lowest end PCIe switches with the fewest lanes sell for $400-$700 for just the chip, before it winds up on a board. At least to buyers who can't enter into volume purchasing agreements.
I'm with Zarathustra[H] on this one. He articulates the issues better than I can.
 