PCI-E lanes

NVMe drives use 4 lanes per drive, so a board with multiple NVMe slots needs to feed each slot x4 lanes. The motherboard has to get those lanes from either the CPU or the chipset, and the exact configuration depends on the CPU, the chipset, and so on. Keep in mind that just because a board has 3 NVMe slots doesn't mean it's very useful: not many CPUs actually have the lanes to drive those slots, so more often than not it falls to the chipset, and that isn't a great way to do it.

The Ultra Quad card is what is known as a bifurcation card, essentially stacking two or more NVMe slots onto a single card. These cards work through bifurcation, which is a way to split a PCIe slot's lanes into smaller pieces without the use of hardware PCIe switches. For example, it takes x16 lanes and divides them into four x4 partitions, which then feed four NVMe drives. Bifurcation is also dependent on the platform, so you cannot just stick a bifurcation card into any motherboard. And to top it off, spending x16 PCIe lanes on a desktop platform is really expensive lane-wise, so it's not something desktop platform users often go for. This is geared toward HEDT and workstation users.
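Here's a rough sketch of the lane math for a passive bifurcation card like that (the per-lane throughput figures are approximate, and the helper is just an illustration, not any real tool):

```python
# Sketch of what bifurcating x16 into x4/x4/x4/x4 means for a passive M.2 riser.
# Numbers are illustrative; the actual behavior depends on the board's
# bifurcation support and how the physical slot is wired.

PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969}  # approx. usable GB/s per lane after encoding overhead

def bifurcate(slot_lanes, drives, lanes_per_drive=4):
    """Split a slot's lanes into fixed x4 groups, as a passive bifurcation card does."""
    groups = slot_lanes // lanes_per_drive
    # Drives beyond the available x4 groups simply have no lanes wired to them.
    return [lanes_per_drive if i < groups else 0 for i in range(drives)]

# x16 slot, four drives: every drive gets x4.
print(bifurcate(16, 4))   # [4, 4, 4, 4]

# x8 slot, four drives on the same passive card: only two x4 groups exist.
print(bifurcate(8, 4))    # [4, 4, 0, 0]

# Peak per-drive bandwidth at x4, Gen3 vs Gen4.
for gen in (3, 4):
    print(f"Gen{gen} x4 ≈ {4 * PCIE_GBPS_PER_LANE[gen]:.1f} GB/s per drive")
```

Cards that use a real PCIe switch behave differently, since the switch can fan out whatever lanes the slot provides.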
 
Really depends on the motherboard. For most modern boards the first m.2 slot has four dedicated lanes, usually from the CPU, and any additional slots will use lanes from the chipset that could either be dedicated or shared. Using an m.2 drive in a slot that shares lanes usually disables something on the board (e.g. a card slot, a SATA controller etc)

The card you linked does require four lanes for each m.2 device. If you plug it into an x8 slot, only two m.2 devices will work.
 
Really depends on the motherboard. For most modern boards the first m.2 slot has four dedicated lanes, usually from the CPU, and any additional slots will use lanes from the chipset that could either be dedicated or shared. Using an m.2 drive in a slot that shares lanes usually disables something on the board (e.g. a card slot, a SATA controller etc)

The card you linked does require four lanes for each m.2 device. If you plug it into an x8 slot, only two m.2 devices will work.
If your board has two m.2 device sockets, but you are using only one, what happens to the PCI-E lanes for the second socket?
 
Let's try and clear this up. M.2 drives are PCI-Express devices. Even their SATA counterparts technically share the same pool of high-speed I/O, as the M.2 slots are PCIe based in nature and even modern SATA ports hang off the chipset's flexible lanes. In any case, there are some real differences between how AMD and Intel drive their PCIe lanes where M.2 based storage is concerned. Having said that, while those technical differences exist, the actual difference in the real world only applies when comparing PCIe 4.0 to PCIe 3.0. The former is only available on the AMD side while Intel is still stuck on PCIe 3.0 for now. For the moment, I'll only cover how this works on consumer platforms like Z490 and X570. Things are different for HEDT systems as they have far more PCIe lanes provided by the CPU.

Intel systems have 16 PCIe lanes provided by the CPU. The rest are attached to the PCH, or motherboard chipset. These are the lanes that service your M.2 based storage, as the CPU's 16x PCIe lanes are reserved for expansion slots, primarily graphics cards. All of your M.2 devices are bottlenecked by the PCH's connection to the CPU, which is the DMI 3.0 link. Think of this as 4x PCIe 3.0 lanes. This is a real concern for RAID arrays, as write speeds are negatively impacted with two drives and reads are negatively impacted with three in use. Effectively, there is no scaling with a third drive on Intel's mainstream platform.

On the AMD side, things are only a little better. There are 4x dedicated PCIe lanes provided by the CPU for storage. These service your primary M.2 slot, usually marked as M.2_1 by most manufacturers or something to that effect. The other slots are serviced by the PCH, just as on Intel, but there are 4x PCIe 4.0 lanes between the PCH and the CPU, so there is far more potential bandwidth. That said, in reality this rarely matters, as it usually takes benchmarks to actually showcase the bandwidth limitations for most workloads. SSDs are so fast that the limitations end up being more theoretical than practical.
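To put those two uplinks into rough numbers, here's a back-of-the-envelope sketch (the per-lane throughput and the 3.5 GB/s drive figure are approximations, not measurements):

```python
# Approximate usable bandwidth of the chipset-to-CPU link that all
# PCH-attached M.2 drives share. Illustrative numbers only.

GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}  # GB/s per lane, approx.

uplinks = {
    "Intel DMI 3.0 (roughly x4 PCIe 3.0)": 4 * GBPS_PER_LANE["PCIe 3.0"],
    "AMD X570 uplink (x4 PCIe 4.0)": 4 * GBPS_PER_LANE["PCIe 4.0"],
}

drive_seq = 3.5  # GB/s, an assumed figure for a fast PCIe 3.0 x4 NVMe drive

for name, bw in uplinks.items():
    print(f"{name}: ~{bw:.1f} GB/s shared")
    for n in (1, 2, 3):
        # Striped drives can only go as fast as the shared uplink allows.
        usable = min(n * drive_seq, bw)
        print(f"  {n} drive(s) striped: limited to ~{usable:.1f} GB/s")
```

On the Intel side the cap is hit with two drives, which is the "no scaling with a third drive" behavior described above; the wider Gen4 uplink pushes that ceiling out.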

Now, there are general purpose PCIe lanes on the AMD side which can be repurposed for M.2 via the CPU. The reality is, the motherboard manufacturers get to choose how the lanes are allocated. If you've read this far, you know that I mentioned AMD having 4x PCIe 4.0 lanes allocated by the CPU for M.2 devices specifically. This isn't necessarily the case, as AMD allows these lanes to be used in other ways. Motherboard makers have the option to split them into 2x M.2 slots with two PCIe 4.0 lanes each, or to use them for dedicated SATA ports. Fortunately, no motherboard maker I know of does this. However, it remains an option. Furthermore, if a motherboard does allow bandwidth sharing of the extra 8x PCIe lanes that come from the CPU, there have to be switches involved to use them this way, and it would eat bandwidth otherwise allocated to traditional expansion slots.

Another option is to use an adapter card that puts M.2 slots on a PCIe card. The number of lanes used for this is up to the design of the card. An 8x card would allow the use of two M.2 drives with 4x PCIe lanes each. These would be serviced by the CPU directly. However, as I said, whether a drive is serviced by the CPU or the PCH really doesn't matter in practice, despite the former having an obvious advantage on paper.

NVMe drives use 4 lanes per drive, so a board with multiple NVMe slots needs to feed each slot x4 lanes. The motherboard has to get those lanes from either the CPU or the chipset, and the exact configuration depends on the CPU, the chipset, and so on. Keep in mind that just because a board has 3 NVMe slots doesn't mean it's very useful: not many CPUs actually have the lanes to drive those slots, so more often than not it falls to the chipset, and that isn't a great way to do it.

This is untrue. At least, based on the way this seems to be worded. I know I'm splitting hairs here. All PCIe devices can use fewer lanes than their maximum allocation. They do not need 4x lanes to work; they only need 4x lanes to work at full speed. Again, the CPU is rarely going to be responsible for feeding the M.2 slots directly, as these usually end up serviced by the PCH, which you already alluded to. However, even in the HEDT world, this isn't necessarily true on Intel's side, as you need VROC to fully leverage the ability to use the CPU for this. (Bullshit, though that may be.)

The Ultra Quad card is what is known as a bifurcation card, essentially stacking two or more NVMe slots onto a single card. These cards work through bifurcation, which is a way to split a PCIe slot's lanes into smaller pieces without the use of hardware PCIe switches. For example, it takes x16 lanes and divides them into four x4 partitions, which then feed four NVMe drives. Bifurcation is also dependent on the platform, so you cannot just stick a bifurcation card into any motherboard. And to top it off, spending x16 PCIe lanes on a desktop platform is really expensive lane-wise, so it's not something desktop platform users often go for. This is geared toward HEDT and workstation users.

Yes indeed.

Really depends on the motherboard. For most modern boards the first m.2 slot has four dedicated lanes, usually from the CPU, and any additional slots will use lanes from the chipset that could either be dedicated or shared. Using an m.2 drive in a slot that shares lanes usually disables something on the board (e.g. a card slot, a SATA controller etc)

The card you linked does require four lanes for each m.2 device. If you plug it into an x8 slot, only two m.2 devices will work.

Incorrect. Intel platforms do not feature dedicated PCIe lanes which service M.2 slots. They still only provide 16x PCIe lanes; the Intel Core i9-10900K still only offers 16x PCIe 3.0 lanes from the CPU's PCIe controller. Where you are likely incorrect is about installing a card that adapts four PCIe based M.2 drives into a slot with fewer than the ideal number of PCIe lanes. It's likely that all four will work, but at reduced speed. They'll be bottlenecked by the slot itself at that point, but all four should show up. That's typically how PCI-Express works: you can use fewer lanes even when it isn't ideal. We see this all the time with graphics cards and other devices.
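To put that general rule in concrete terms, here's a rough sketch of link-width negotiation (the helper below is a hypothetical model, not any real driver API):

```python
# Sketch of PCIe link-width negotiation: the link trains to the widest width
# both ends support, so a x4 device behind a narrower link still works, just
# slower. Hypothetical model for illustration only.

SUPPORTED_WIDTHS = (16, 8, 4, 2, 1)

def negotiated_width(device_max_lanes, slot_electrical_lanes):
    """Return the link width the two ends would settle on."""
    limit = min(device_max_lanes, slot_electrical_lanes)
    return max(w for w in SUPPORTED_WIDTHS if w <= limit)

print(negotiated_width(4, 16))  # x4 NVMe drive in a x16 slot -> trains at x4
print(negotiated_width(4, 1))   # same drive behind a x1 link -> works at x1
print(negotiated_width(16, 8))  # x16 graphics card in a x8 slot -> x8

# Note: passive bifurcation risers are a separate case; M.2 sockets whose
# lanes simply aren't wired to the slot get nothing to negotiate with.
```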

If your board has two m.2 device sockets, but you are using only one, what happens to the PCI-E lanes for the second socket?

Nothing. They remain available unless that secondary slot shares its bandwidth with another device. When I speak of sharing, what I typically mean is an either/or type of situation: you can use the last 2x SATA ports out of your available six, or you can use an M.2 device, but not both, as one disables the other. If the slot doesn't share bandwidth and remains unused, then those lanes remain available on the PCH and do nothing.
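As a made-up example of how those sharing rules tend to look (the socket and port names here are purely hypothetical; check your own board's manual for the real ones):

```python
# Hypothetical lane-sharing map: populating a shared M.2 socket disables
# whatever it shares bandwidth with, while an empty, non-shared socket just
# leaves its PCH lanes idle. Names are invented for illustration.
SHARING_RULES = {
    "M2_1": None,                 # dedicated lanes, shares with nothing
    "M2_2": ["SATA5", "SATA6"],   # either/or: using M2_2 disables these ports
}

def disabled_by(populated_sockets):
    """Return the ports/slots that become unavailable for a given build."""
    lost = []
    for socket in populated_sockets:
        shared = SHARING_RULES.get(socket)
        if shared:
            lost.extend(shared)
    return lost

print(disabled_by(["M2_1"]))          # [] -> nothing lost, M2_2's lanes sit idle
print(disabled_by(["M2_1", "M2_2"]))  # ['SATA5', 'SATA6']
```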
 

I suppose I was a little too liberal with “usually.” My most recent few systems have been AMD based.


As far as the card linked is concerned, you can see the traces for the lanes going straight to the M.2 slots, and there's a DIP switch that changes which pin gets grounded to tell the system how many lanes are in use. So in this particular case, having four drives on an x8 slot would not work as expected. Skimming through the manual gives me the impression that some sort of PCIe bifurcation is required too. Edit: looks like bifurcation was brought up above.

There are likely other cards that use PCIe switching to make use of various lane configurations provided by the motherboard slot. It's not something I've looked into much, though.
 
Really depends on the motherboard. For most modern boards the first m.2 slot has four dedicated lanes, usually from the CPU, and any additional slots will use lanes from the chipset that could either be dedicated or shared. Using an m.2 drive in a slot that shares lanes usually disables something on the board (e.g. a card slot, a SATA controller etc)

The card you linked does require four lanes for each m.2 device. If you plug it into an x8 slot, only two m.2 devices will work.
It depends more on Intel vs. AMD... Intel has only 16 usable CPU lanes, normally for the GPU, so all NVMe drives share the chipset link (DMI 3.0, essentially an x4 PCIe 3.0 link). AMD has 20 usable CPU lanes, so the first NVMe drive connects directly to the CPU while the rest share the chipset link (on X570 that is a shared x4 PCIe 4.0 link, double the bandwidth of Intel's; on B550 the uplink is x4 PCIe 3.0). The B550 chipset only gives/splits out PCIe 3.0 lanes from there, while X570 splits out PCIe 4.0 lanes.

Most of the cheap PCIe NVMe adapters just split the lanes and require bifurcation support. I haven't used or researched these too much, so I'll leave it to others.
 
I am not running any NVMe drives on my MSI B550M Mortar, but I am running an RX 5500 XT 8GB at x8 on PCI Express 4.0; maybe that's just the way it was built to run, which could be why its release date was so much later.
 
OK, thank you. I read everything and now I have a much clearer picture of how things work.
 
I am not running any NVMe drives on my MSI B550M Mortar, but I am running an RX 5500 XT 8GB at x8 on PCI Express 4.0; maybe that's just the way it was built to run, which could be why its release date was so much later.
Why are you running only x8 for your GPU? Do you have something else splitting your pcie lanes?
 
Why are you running only x8 for your GPU? Do you have something else splitting your pcie lanes?
No. It's the limitation of the RX 5500 series GPU hardware. These lower-end RDNA GPUs only run at x8 bandwidth electrically no matter what. One would need to step up to the RX 5600 series (among RDNA Navi GPUs) just to even use 16 lanes.
 
No. It's the limitation of the RX 5500 series GPU hardware. These lower-end RDNA GPUs only run at x8 bandwidth electrically no matter what. One would need to step up to the RX 5600 series (among RDNA Navi GPUs) just to even use 16 lanes.
Ahh, my bad. I forgot about that. Probably OK, considering PCIe 4.0 x8 is the same bandwidth as PCIe 3.0 x16. Although PCIe bandwidth tends to be more important for lower-end/lower-memory GPUs.
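A quick sanity check on that equivalence, using approximate per-lane throughput (ballpark figures after encoding overhead):

```python
# PCIe 4.0 doubles per-lane throughput over 3.0, so half the lanes gives
# roughly the same total bandwidth. Approximate figures only.
per_lane = {"3.0": 0.985, "4.0": 1.969}  # GB/s per lane

print(f"PCIe 3.0 x16 ≈ {16 * per_lane['3.0']:.1f} GB/s")
print(f"PCIe 4.0 x8  ≈ {8 * per_lane['4.0']:.1f} GB/s")
```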
 
Ahh, my bad. I forgot about that. Probably OK, considering PCIe 4.0 x8 is the same bandwidth as PCIe 3.0 x16. Although PCIe bandwidth tends to be more important for lower-end/lower-memory GPUs.

I wanted to try an RX 5500 XT 8GB, and 4.0 x8 I believe is how these cards were made to run, with their release date being so much later to allow enough X570 chipsets to flood the market. Anyway, it's been running smoothly.
 
I got a fun addition to this question. I recently upgraded to an X570 Taichi because I wanted the 3x M.2 slots and PCI-E lanes for other purposes. (Also the 8x SATA ports.)
This is all with a Matisse 3900X

So, according to the spec sheet:
http://www.asrock.com/mb/AMD/X570 Taichi/#Specification
If M.2_3 is occupied, PCIe 5 is disabled (the one at the very bottom of the board). This seems pretty straightforward, but the board doesn't seem to detect a 4th M.2 drive if I put it into PCIe 3 (the 2nd x16 slot) when all 3 other M.2 slots are occupied. I have my GPU in PCIe 1 (x16).

(I do have the lanes set to Auto in the UEFI, I'll split them manually and see if that works.)

I'm going to try moving the GPU to PCIe 3 and the 4th NVMe drive to PCIe 1 and see if it reads it then.

Furthermore, a question: mechanically, if I put a PCIe 4.0 M.2 drive into a riser card made for 3.0 (in a 4.0 slot), will it still get enough power to function? I haven't done this yet, just curious before I try.

Does using a PCIe 3.0 device in a 4.0 slot still use the same number of lanes? So are my 3.0 NVMe drives still counting as 4x PCIe 4.0 lanes?

Thanks!

Edit: Update: I was able to get my 4th NVMe drive working when I moved the GPU to PCIe 3 and the NVMe riser to PCIe 1. My PCIe 4.0 NVMe drive is running at rated speeds in CrystalDiskMark.
 