When did the CPU lane thing start?

ZodaEX
I see people sometimes talk about how modern motherboards have a limited number of PCI-E lanes directly connected to the CPU.
When did this practice first start? I feel like 15 years ago this was never discussed.
Do Sandy Bridge boards have CPU connected PCI-E lanes?
 
I guess back then you just had a single GPU, now we have NVME drives along with a GPU and other PCIe Add-on cards.

You do remember that, when using more than one GPU, it would run x8/x8 instead of x16, right?
 
I remember some SLI boards would do that. Neither my Sandy Bridge nor my Athlon II rig disables any lanes, though, even with every single expansion slot populated (some of their slots are legacy PCI, mind you).
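For anyone who wants to check what their slots actually negotiated: on Linux the link width is right there in sysfs. A minimal sketch, assuming the standard `max_link_width`/`current_link_width` attributes are exposed for your devices:

```python
import glob
import os

def pcie_link_widths():
    """Return (device, max_width, current_width) for every PCI device
    that exposes link-width attributes in sysfs."""
    results = []
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        cap = os.path.join(dev, "max_link_width")
        cur = os.path.join(dev, "current_link_width")
        if os.path.exists(cap) and os.path.exists(cur):
            with open(cap) as f_cap, open(cur) as f_cur:
                results.append((os.path.basename(dev),
                                f_cap.read().strip(),
                                f_cur.read().strip()))
    return results

for dev, max_w, cur_w in pcie_link_widths():
    print(f"{dev}: x{cur_w} of x{max_w}")
```

A GPU slot that has dropped from x16 to x8 would show up here as `x8 of x16`.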
 
A long time ago you had a bus which connected the CPU to the northbridge, and another which connected the northbridge to the southbridge. (ed: or a ring bus connecting all three) Back then, those buses (and the chipsets) were the primary bottleneck (assuming you didn't have a shit GPU or on-board graphics).

Eventually the legacy buses were phased out, but you still have PCIe buses between the CPU and the PCIe slots, and between the CPU and board components (which may also control some of the slots).

The primary differences between now and then are topology and bus technology. Now (more) devices have more direct access to the CPU and memory, but there is still some segmentation.
 
I think it was either Nehalem or Sandy Bridge that began moving the PCIe controller onto the CPU package. Not 100% sure.
 
The CPU lane thing actually started with Lynnfield (first-gen i5/i7 on socket LGA 1156 from 2009). Prior to Lynnfield, the CPU had no PCI-E hub at all, but instead connected to a separate northbridge chip that provided its own primary PCI-E lanes.

The Lynnfield on-die PCI-E controller ran its 16 PCI-E lanes at PCI-E 2.0 clocks but only half-duplex operation. CPUs introduced since then run their PCI-E lanes at full duplex.
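For scale, per-direction PCIe bandwidth falls out of the signaling rate and the line coding. The rates and encodings below are the standard spec figures; the little helper itself is just an illustrative sketch:

```python
def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, per direction.

    PCIe 1.x/2.x use 8b/10b encoding (80% efficient);
    PCIe 3.0+ use 128b/130b (~98.5% efficient).
    """
    gt_per_s = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}[gen]
    efficiency = 0.8 if gen <= 2 else 128 / 130
    # transfer rate (GT/s) x efficiency = payload Gbit/s; divide by 8 for GB/s
    return gt_per_s * efficiency * lanes / 8

print(pcie_bandwidth_gbps(2, 16))  # Lynnfield's x16 at Gen 2: 8.0 GB/s each way
print(pcie_bandwidth_gbps(2, 8))   # an x8/x8 SLI split: 4.0 GB/s per card
```

So even the "halved" x8/x8 SLI split still left each card 4 GB/s per direction on Gen 2.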
 

That's exactly what I wanted to know. Thanks a lot for that interesting information! These days I upgrade less and less often, so sometimes I miss out on the latest hardware developments, since I tend not to read up on hardware until I feel the need to upgrade.
 
There were also more boards with PLX chips back then (effectively a PCIe switch), since more things had to be on add-in cards. Those chips have gone up in price and are manufactured by only one company now.
 
Oh man, I remember when mobos had a northbridge. Didn't they also have a southbridge? For I/O or something.
 
Yes, but back then they were called video cards, not this new annoying GPU buzzword!
Darn, I have still been calling them video cards. When did we stop calling them that?

I always thought that a GPU was specifically the processing core of the video card (hence the abbreviation graphics processing unit). Obviously, the video card is more than a processor since it has memory, cooling, video ports, etc... The GPU is without dispute the most important part of a video card and so I can understand that it would just be an abstraction to refer to the entire thing as a GPU. That said, in my opinion, calling a video card a GPU is kind of like calling a PC a CPU. It just feels weird.
 
Same with the words "drive" and "disk" for SSDs or SD cards, when there are no motors to drive anything, nor a disk, which would imply something circular and flat.
 
when pci-e was introduced in 03...

not quite

Intel and AMD put all the lanes on the chipset until Nehalem:

(image: P55 platform block diagram)

(image: core architecture diagram)

Prior to that, all you got was DMI on the northbridge (which got rolled by HyperTransport).

Back in the day, hardly anyone pushed PCIe I/O - they made the change because they already had to add more lanes to feed the chipset... might as well make it direct-mapped.
 
Oh. And the growth of HEDT for market segmentation helped too.

Helped and hurt for a small period of time. I could be incorrect, but LGA 2011-v3 was the first time we saw segmentation of the CPU lanes as a defining feature of the product stack. On LGA 2011 you could buy an i7-3820 at the bottom of the food chain or a 3960X at the top, and both had 40 PCIe lanes. Fast forward to 2011-v3 and that 5820K you just purchased only had 28 lanes at a price of $389-396. Compare that to the 5930K, the next SKU up for nearly $200 more ($583-$594 MSRP), which had a whole 0.1 GHz higher boost clock but 40 PCIe lanes. It was similar with LGA 2066 and the bottom two SKUs. At least we're back to a place now where product stacks tend to offer the same core specs and just vary in cores, frequency, and cache.
 
It was the Lynnfield architecture CPUs that first integrated a PCIe controller on die, if I recall correctly. When PCIe was first introduced, all of the lanes were on the chipset and were separate from the CPU.
 
I meant between consumer and non-consumer, but yeah. That was also stupid.
 
The term GPU started with NVIDIA - if you look at the history, they took a HUGE risk back in the 2000s to go with parallelism and introduce CUDA. It has meaning and is not just some fluff PR term.
 
I feel like this started when we moved the Northbridge over to the CPU.


this is precisely right, as I covered in my post

early PCIe implementations wasted a ton of GPU -> CPU bandwidth, all in the name of cutting the cost of adding too many high-speed I/O lanes directly off the CPU

but, on the upside, Intel's northbridge memory controller meant that they could perform bulk reads without any overhead!

but yes, a single-chip memory controller plus x16 PCIe + x4 DMI was THE Way (TM) come 2009 (and they added the GPU with Sandy Bridge):

(image: P55 platform block diagram)
 

Negative. There were literally zero PCIe lanes on CPUs until Lynnfield and Clarkdale on the server side. The north bridge did traditionally do both, acting as an I/O hub and memory controller. However, until the introduction of Lynnfield, all lanes went through the chipset.

The diagram you linked is from a later generation where it was already integrated.
 
Negative. There were literally zero PCIe lanes on CPUs until Lynnfield and Clarkdale on the server side. The north bridge was literally a memory controller and nothing else. Until Lynnfield, all PCIe lanes went through the chipset.

The diagram you linked is from a later generation. Nehalem first integrated the memory controller but not the PCIe controller.


sorry man, forgot DMI is a bridge interconnect only - you could still potentially overload external devices hanging off the southbridge, but the PCIe lanes all went through the CPU bus.

but AMD had a similar set of issues to what I described, as the memory controller was on the CPU and the GPU lanes were on a bridge chip
 

There was the potential to oversaturate DMI, but it rarely happened in practice - more in benchmarking than in the real world.
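Putting rough numbers on that: DMI of that era was effectively a PCIe 2.0 x4 link, about 2 GB/s per direction, and the peak figures below for typical chipset-attached devices are ballpark assumptions - which shows why everyday workloads seldom hit the ceiling:

```python
# DMI of that era ~= PCIe 2.0 x4: 5 GT/s * 0.8 (8b/10b) * 4 lanes / 8 bits = 2.0 GB/s per direction
DMI_GBPS = 5.0 * 0.8 * 4 / 8

# Ballpark peak demands in GB/s for devices hanging off the chipset (assumed figures)
devices = {
    "SATA SSD": 0.55,
    "second SATA SSD": 0.55,
    "USB 3.0 controller": 0.5,
    "gigabit NIC": 0.125,
}

total = sum(devices.values())
print(f"DMI budget: {DMI_GBPS:.1f} GB/s per direction, aggregate device peak: {total:.3f} GB/s")
print("oversubscribed on paper" if total > DMI_GBPS else "fits - saturating it takes a deliberate worst case")
```

Even with all of those devices going flat out at once, the link is only just approached, which matches the "benchmarks more than real world" observation.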
 