PCI Express 7.0 Spec Hits Draft 0.3, 512GBps Connectivity on Track For 2025 Release

erek · [H]F Junkie · Joined Dec 19, 2005 · Messages: 10,929

2025 isn't soon enough!

"While we traditionally think of PCIe first and foremost as a bus routed over printed circuit boards, the standard has always allowed cabling as well. And with their newer standards, the PCI-SIG is actually expecting the use of cabling in servers and other high-end devices to grow, owing to the channel reach limitations of PCBs, and how it’s getting worse with higher signaling frequencies. So, cabling is being given a fresh look as an option to maintain/extend channel reach with the latest standards, as new techniques and new materials are creating new options for better cables.

To that end, the PCI-SIG is developing two cabling specifications, which are expected to be ready for release in Q4 of this year. The specs will cover both PCIe 5.0 and PCIe 6.0 (since the signaling frequency is unchanged), with specifications for both internal and external cables. Internal cabling would be to hook up devices to other parts within a system – both devices and motherboards/backplanes – while external cabling would be used for system-to-system connections.

In terms of signaling technologies and absolute signaling rates, PCI Express trails Ethernet by a generation or so. And that means that much of the initial development on high speed copper signaling has already been tackled by the Ethernet workgroups. So, while work still has to be done to adapt these techniques for PCIe, the basic techniques have already been proven, which helps to simplify the development of the PCIe standard and cabling by a bit.

All told, cable development is decidedly a more server use case of the technology than what we see in the consumer space. But a cabling standard is still going to be an important development for those use cases, especially as companies continue to stitch together ever more powerful systems and clusters."


Source: https://www.anandtech.com/show/1890...12gbps-connectivity-on-track-for-2025-release
 
Transitioning over to PAM4 from NRZ is actually a big deal. That sort of signal processing isn't cheap, and the signaling is comparatively easy to interfere with, meaning they'll likely need to sandwich the lanes into a series of separate shielded layers to ensure stability. $$$
Way too rich for my blood, but will be fun to see what they do with it all the same.
 
So they're doubling the frequency from PCIE6? Will PCB materials still be used for this, or is this now coax/differential cabling only territory?
 
Transitioning over to PAM4 from NRZ is actually a big deal. That sort of signal processing isn't cheap, and the signaling is comparatively easy to interfere with, meaning they'll likely need to sandwich the lanes into a series of separate shielded layers to ensure stability. $$$
Way too rich for my blood, but will be fun to see what they do with it all the same.

Don't worry, Asus will make it available on its highest end motherboards at a charge of at least $100 per lane, no warranty on signal integrity or longevity, with use of speeds greater than PCIe 2.0 voiding the whole board's warranty. By the same token Gigabyte will charge $75 per lane, but use PCB materials that literally crumble apart in your hands like a 2,000 year old piece of tomb papyrus and any signal traces disrupted in the process will be considered non-covered physical damage caused by the user.
 
Transitioning over to PAM4 from NRZ is actually a big deal. That sort of signal processing isn't cheap, and the signaling is comparatively easy to interfere with, meaning they'll likely need to sandwich the lanes into a series of separate shielded layers to ensure stability. $$$
Way too rich for my blood, but will be fun to see what they do with it all the same.
It is going to take more expensive equipment, agreed.
Hopefully by 2025+ those parts will have become more moderately priced and available to make this less bleeding edge in both availability and cost.

For those who would like to understand NRZ vs PAM4 here is a great article detailing them both.
PAM4 can do what NRZ does in half the time, which is very impressive but definitely comes at a financial cost.

[Image: NRZ vs. PAM4 bit patterns]
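
If it helps to see the "half the time" part concretely, here's a toy Python sketch of my own (a naive bit-to-level mapping, not the spec's actual line coding, which also layers Gray coding, FEC, and FLIT framing on top): pack the same payload into NRZ symbols and into PAM4 symbols and count how many unit intervals each needs.

Code:
# Toy illustration of "PAM4 does in half the time what NRZ does".
def nrz_symbols(bits):
    # NRZ: one bit per symbol, two signal levels (0 / 1 here).
    return list(bits)

def pam4_symbols(bits):
    # PAM4: two bits per symbol, four signal levels (0..3 here).
    assert len(bits) % 2 == 0, "pad the stream to an even number of bits"
    return [(bits[i] << 1) | bits[i + 1] for i in range(0, len(bits), 2)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]      # 8 arbitrary payload bits
print("NRZ :", nrz_symbols(bits))    # 8 unit intervals on the wire
print("PAM4:", pam4_symbols(bits))   # 4 unit intervals for the same payload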
 
Don't worry, Asus will make it available on its highest end motherboards at a charge of at least $100 per lane, no warranty on signal integrity or longevity, with use of speeds greater than PCIe 2.0 voiding the whole board's warranty. By the same token Gigabyte will charge $75 per lane, but use PCB materials that literally crumble apart in your hands like a 2,000 year old piece of tomb papyrus and any signal traces disrupted in the process will be considered non-covered physical damage caused by the user.
I'm not even sure we'll see any consumer boards use PCIe 7. Or maybe 10 years after it's become standard on servers.
 
It is going to take more expensive equipment, agreed.
Hopefully by 2025+ those parts will have become more moderately priced and available to make this less bleeding edge in both availability and cost.

For those who would like to understand NRZ vs PAM4 here is a great article detailing them both.
PAM4 can do what NRZ does in half the time, which is very impressive but definitely comes at a financial cost.


Won't hit the server market until at least 2029, assuming they hit the 2025 spec finalization they want, and it will likely never make it to the consumer market outside of specialized boards unless desktops drastically change in layout standardization.

Edit: I'd also be surprised if we ever see PCIe 6.0 on consumer boards; there's just no feasible reason for it to ever exist outside of the server market.
 
Won't hit the server market until at least 2029, assuming they hit the 2025 spec finalization they want, and it will likely never make it to the consumer market outside of specialized boards unless desktops drastically change in layout standardization.

Edit: I'd also be surprised if we ever see PCIe 6.0 on consumer boards; there's just no feasible reason for it to ever exist outside of the server market.
I am not so sure that is true, but I do think it is a solid decade out; I have yet to see anything consumer-facing that even gets close to saturating PCIe 5.
 
So they're doubling the frequency from PCIE6? Will PCB materials still be used for this, or is this now coax/differential cabling only territory?

At least you've got to use PCB to get from the cpu socket to the connector. Looking at server announcements, a lot of lanes are going to cable connectors right now and not too many lanes to actual slots on the motherboard. Mostly to service storage, but sometimes remote slots are connected via cables rather than a riser board.
 
At least you've got to use PCB to get from the cpu socket to the connector. Looking at server announcements, a lot of lanes are going to cable connectors right now and not too many lanes to actual slots on the motherboard. Mostly to service storage, but sometimes remote slots are connected via cables rather than a riser board.
You never know, maybe future server cpus will look like this:



https://www.samtec.com/cables/high-speed/assemblies/si-fly

Would cost a pretty penny.
 
At least you've got to use PCB to get from the cpu socket to the connector. Looking at server announcements, a lot of lanes are going to cable connectors right now and not too many lanes to actual slots on the motherboard. Mostly to service storage, but sometimes remote slots are connected via cables rather than a riser board.
Yeah, it's getting hard to fit 150+ PCIe lanes in a board that is populated and not get all sorts of cross-talk between them and the power planes.
The systems are too noisy. I have seen some demos with fiber-optic PCIe connectors; they had minor latency issues with the signal converters when I saw the demo (6 years ago?), but I believe that was resolved not too long after.
 
You never know, maybe future server cpus will look like this:


https://www.samtec.com/cables/high-speed/assemblies/si-fly

Would cost a pretty penny.
I have heard talk of OEMs wanting Intel, and possibly AMD, to work on a consumer-focused OAM-like socket.
Intel and AMD would then be building a contained SoC that the OEMs would simply mount on a generic board.
If you pop open a laptop or even some servers, you will see daughter boards and sub-boards all over, connected via different proprietary riser cables, to solve the PCIe lane issues in confined spaces and save costs.
They are looking for standards so they can work on cutting costs, and this is just one more step in that direction.
Because I assure you that, for as much as we get annoyed at Intel constantly changing their sockets, the OEMs are far more so.
 
We don't even have PCI-E 5.0 devices in the consumer market yet, and we're already working on getting 7.0 out in less than 2 years? I'm sure this kind of bandwidth is more important in the enterprise and research space than it is for consumers.
 
So they're doubling the frequency from PCIE6? Will PCB materials still be used for this, or is this now coax/differential cabling only territory?
It supposedly works on both; I mean, copper is copper, and if the width of the trace is equal to the conductor diameter of a cable then everything is equal in the end. How they do it is by varying the signal more finely: instead of just basic highs and lows, it allows mixes of voltages to represent different binary combinations, so instead of a single bit being represented, a pair of bits can be represented at a time. Where it really shines, though, is when using fiber-based breakout cables.

So the min and max voltages for the signal don't differ between the two, but PAM4 allows for low, mid-low, mid-high, and high states, where NRZ is just low and high.

You need much more sensitive I/O controllers on each side to pull it off, and you need to be incredibly precise in your transmission of the signals as well, because there is much less room for error.
In NRZ you might have a 4 V gap between high and low states, so a 1.5 V sample would still read as a 1 V low and a 4.5 V sample would read as a 5 V high.
But in PAM4, would a 1.5 V sample read as the 1 V low or the 2 V mid-low?
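
Playing with those example numbers in a quick Python sketch (my assumption: keep the 1 V / 5 V endpoints and split the same swing into four evenly spaced levels, so the "mid-low" lands near 2.3 V rather than exactly 2 V; real PCIe swings are far below a volt), the shrinking margin is easy to see:

Code:
# Nearest-level "slicer" for the illustrative voltages above.
def decode(sample, levels):
    # Return the nominal level closest to the sampled voltage.
    return min(levels, key=lambda lvl: abs(lvl - sample))

NRZ_LEVELS  = [1.0, 5.0]                 # low / high, 4 V apart
PAM4_LEVELS = [1.0, 2.33, 3.67, 5.0]     # same swing split into four levels

for sample in (1.5, 1.7, 4.5):
    print(f"{sample} V -> NRZ reads {decode(sample, NRZ_LEVELS)} V, "
          f"PAM4 reads {decode(sample, PAM4_LEVELS)} V")

# 1.5 V still slices as the 1.0 V level either way, but under PAM4 it sits only
# ~0.17 V from the decision threshold (~1.67 V) versus 1.5 V of margin under NRZ.
# Nudge the noise to 1.7 V and PAM4 already flips to the 2.33 V level while NRZ
# still reads a clean low.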
 
We don't even have PCI-E 5.0 devices in the consumer market yet, and we're already working on getting 7.0 out in less than 2 years? I'm sure this kind of bandwidth is more important in the enterprise and research space than it is for consumers.
Yeah, the leading edge on this isn't for consumers. Always having a pipeline for better is a good way to keep the PCI-E consortium together, though. The same thing happens with MPEG codecs: release the new one, start on the next one, and don't worry too much about adoption; it will come when it comes.
 
It supposedly works on both; I mean, copper is copper, and if the width of the trace is equal to the conductor diameter of a cable then everything is equal in the end. How they do it is by varying the signal more finely: instead of just basic highs and lows, it allows mixes of voltages to represent different binary combinations, so instead of a single bit being represented, a pair of bits can be represented at a time.
Edit: I'm not an expert on this subject, but here's what I've seen:

Typically, most PCB manufacturing methods are not ideal for high-speed signals. Let me give you one example: PCBs are typically fiberglass and resin. The fiberglass is layered as a fabric with fibers woven at 90 degrees to one another, and the fiberglass has a different dielectric constant from the resin. PCIe traces are small enough that when they are printed on the PCB, they have a chance of having more fiberglass close to them or more resin. A single differential pair could have one wire printed over a fiber and the other wire printed over mostly resin, introducing skew between the two signals. Not ideal.

The solution at my last company was to have the manufacturer reorient the entire weave at non-90-degree angles relative to the board, so that on average all of the PCIe signals cross many fibers and are not aligned to any one fiber. Another solution is to not route PCIe links at 90-degree angles on the board itself.

More expensive PCB materials don't have as much of a problem with this (although it might still be a problem, not sure).

https://www.nwengineeringllc.com/article/what-is-the-fiber-weave-effect-in-a-pcb-substrate.php
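
For a rough feel of how much skew the weave effect can add, here's a back-of-the-envelope Python sketch. The effective-Dk numbers (3.8 over a glass bundle vs 3.4 over a resin-rich window) and the 10 cm length are just plausible FR-4-ish assumptions on my part, not measured values:

Code:
# Stripline propagation delay scales with sqrt(effective Dk), so a Dk mismatch
# between the two wires of a differential pair turns directly into skew.
from math import sqrt

C_MM_PER_PS = 0.2998                     # speed of light, mm per picosecond

def delay_ps_per_mm(dk_eff):
    return sqrt(dk_eff) / C_MM_PER_PS

dk_over_glass = 3.8    # assumed: leg of the pair running mostly over glass bundles
dk_over_resin = 3.4    # assumed: its partner running mostly over resin-rich windows
length_mm     = 100    # assumed: a 10 cm routed pair

skew_ps = (delay_ps_per_mm(dk_over_glass) - delay_ps_per_mm(dk_over_resin)) * length_mm
print(f"~{skew_ps:.0f} ps of intra-pair skew over {length_mm} mm")   # ~35 ps here

# Tens of picoseconds is on the order of a whole symbol period at PCIe 6.0/7.0
# rates (31.25 ps at 32 Gbaud), hence the rotated weaves and angled routing.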
 
There's total raw bandwidth, but there is also the lane-count angle that could be interesting with faster lanes.

A single PCIe 5 lane can drive a fast NVMe drive, for example (the max read/write speed of the fastest PCIe 3.0 x4 drives, while potentially being better at everything else and all-around excellent), and many video cards could work at x4.

As long as a PC component uses more than one lane, the fastest generation could have value for it (though obviously the equation is not simple, and there are many costs to going faster and to making hardware that only works well on the latest motherboards, etc...).

Imagine how much connectivity you get with 20 CPU lanes and 8 chipset lanes if your drive takes 1 and a 40 Gb/s Ethernet adapter takes 1; with the GPU taking only x4 you'd have a lot of lanes left over, and it would make PCIe-attached RAM usage possible for some non-latency-critical scenarios.
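
To put rough numbers on that, here's a simple per-lane arithmetic sketch in Python (it ignores encoding, FLIT, and protocol overhead, so real-world figures come in somewhat lower):

Code:
# Approximate link bandwidth by generation: GB/s ~= GT/s * lanes / 8.
RATES_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}

def approx_gb_per_s(gen, lanes):
    return RATES_GT_S[gen] * lanes / 8

print(f"PCIe 3.0 x4 : ~{approx_gb_per_s('3.0', 4):.0f} GB/s")    # classic NVMe link
print(f"PCIe 5.0 x1 : ~{approx_gb_per_s('5.0', 1):.0f} GB/s")    # same ballpark on one lane
print(f"PCIe 5.0 x4 : ~{approx_gb_per_s('5.0', 4):.0f} GB/s")
print(f"PCIe 7.0 x16: ~{approx_gb_per_s('7.0', 16):.0f} GB/s per direction")

# A single 5.0 lane really does match a 3.0 x4 SSD's link, and a full 7.0 x16
# slot is where the headline 512 GB/s (256 GB/s each way) figure comes from.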
 
How they do it is by varying the signal more finely: instead of just basic highs and lows, it allows mixes of voltages to represent different binary combinations, so instead of a single bit being represented, a pair of bits can be represented at a time.
Sorry missed this on my first read. Yes what you say is true and I agree, but I was referring to PCIe7. There is no signalling difference between PCIe6 and PCIe7 (they both use PAM4), yet PCIe7 is twice the throughput. So I'm assuming that means PCIe7 is running at twice the frequency, which is where my cable vs PCB concerns come in.
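
The arithmetic backs that up; assuming the same 2-bits-per-symbol PAM4 signaling, doubling the transfer rate means doubling the symbol rate on the wire, and with it the Nyquist frequency the channel has to carry:

Code:
# 6.0 -> 7.0 keeps PAM4, so the extra throughput comes purely from clocking the
# wire twice as fast.
def gbaud(gt_per_s, bits_per_symbol=2):
    return gt_per_s / bits_per_symbol

for gen, gt in (("6.0", 64), ("7.0", 128)):
    sym = gbaud(gt)
    print(f"PCIe {gen}: {gt} GT/s PAM4 -> {sym:.0f} Gbaud, ~{sym / 2:.0f} GHz Nyquist")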
 
Sorry missed this on my first read. Yes what you say is true and I agree, but I was referring to PCIe7. There is no signalling difference between PCIe6 and PCIe7 (they both use PAM4), yet PCIe7 is twice the throughput. So I'm assuming that means PCIe7 is running at twice the frequency, which is where my cable vs PCB concerns come in.
Well the article says this is done with extra layers and shorter traces.

However, PCIe 7.0 will require shorter traces to achieve those speeds, so motherboards could be more expensive as they'll likely require extra components and thicker PCBs.
 
edit, i'd also be surprised if we ever see pcie 6.0 on consumer boards as well, there's just no feasible reason for it to ever exist outside of the server market.
Sure there is....

it's called grub, moolah, greenz, duckybuks, milk, etc....

As soon as any of the component makers figures out the best way to justify their ever-increasing prices and lure in the obligatory "bigger number = must be better" crowd, they will do it, regardless of any other concerns :(
 