IOMMU Group 28:
	0a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
	0a:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
IOMMU Group 29:
	0b:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)
	0b:00.1 Audio device: NVIDIA Corporation GP108 High Definition Audio Controller [10de:0fb8] (rev a1)
What am I looking for in lspci -vv? I have the latest BIOS. Do you mean a custom BIOS?

I have worked with a similar board: X9DRi-LN4F.
That one needed a BIOS update for bifurcation to work properly.
I assume you have a Linux system available. 'sudo lspci -vv' will provide more insight, especially the number of root ports and the link capabilities section of those ports.
Toggling the bifurcation setting correctly adjusts it to x4, but it only shows the first drive. The rest of them go missing.

You are welcome to post the output here. Have it set to x16 and check the root ports. They will show a LnkCap of 8GT/s, x16.
If you set it to x4x4x4x4, you should see this drop to x4, and there should be four root ports instead of one, each with a LnkCap of x4.
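To make that check quicker, here is a small sketch, assuming a Linux box with pciutils installed. Boards differ in how they label root ports in the lspci description, so the "Root Port" pattern may need adjusting for your platform:

```shell
#!/bin/sh
# For every device lspci labels as a PCI Express root port, print its
# link capability (LnkCap) and negotiated link status (LnkSta).
# Expectation: one port at Width x16 in x16 mode, four ports each at
# Width x4 in x4x4x4x4 mode.
for dev in $(lspci | awk '/Root Port/ {print $1}'); do
    echo "== $dev =="
    sudo lspci -vv -s "$dev" | grep -E 'LnkCap:|LnkSta:'
done
```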
04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 (rev 01) (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd Device a801
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 32
	Region 0: Memory at dfa00000 (64-bit, non-prefetchable) [size=16K]
	Region 2: Memory at dfa04000 (32-bit, non-prefetchable) [size=256]
	Capabilities: Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: Express (v2) Endpoint, MSI 00
		DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 25.000W
		DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
		LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM not supported, Exit Latency L0s <4us, L1 <64us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR+, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
			 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
	Capabilities: [b0] MSI-X: Enable+ Count=9 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [148 v1] Device Serial Number 00-00-00-00-00-00-00-00
	Capabilities: [158 v1] Power Budgeting <?>
	Capabilities: [168 v1] #19
	Capabilities: [188 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [190 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- L1_PM_Substates-
	Kernel driver in use: pciback
	Kernel modules: nvme
I've looked around, and the overwhelming consensus seems to be that the LN4F works but the F will not, something about a faulty bifurcation implementation. Holding out hope for someone to counteract this.

The interesting part is the root ports, not the device. You can PM me the complete output of lspci in x16 mode and in x4x4x4x4 mode and I will go through it. In any case, I assume the bifurcation setting is not implemented correctly.
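One way to compare the two modes without eyeballing the full dumps is to capture each and diff them. A sketch, assuming a Linux system; the file names are just examples:

```shell
#!/bin/sh
# Capture verbose lspci output in each BIOS mode, then diff the captures.
# The root ports and LnkCap/LnkSta lines that change between modes are
# the interesting part.
sudo lspci -vv > /tmp/lspci-x16.txt        # run with the slot set to x16
# ...reboot, switch the BIOS setting to x4x4x4x4, then:
sudo lspci -vv > /tmp/lspci-x4x4x4x4.txt
diff /tmp/lspci-x16.txt /tmp/lspci-x4x4x4x4.txt | grep -E 'Root Port|LnkCap|LnkSta'
```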
This sounds a bit like @marcosscrivens' issues with his B450. You could try to raise a support ticket with ASRock; they are quite helpful.

Hello, I have an 8x8 bifurcation riser and an ASRock X399 Taichi motherboard, BIOS version 3.8. This BIOS version offers an x8x8 mode, but the problem is that one video card runs at x8 while the other falls back to PCIe 2.0 x1; x4x4 mode works great. Does anyone know how to solve this?
I have a crazy request.
@C_Payne: Would it be possible to have a PCB in the shape of an ATX motherboard (though not as deep, i.e. just the part below the I/O shield) where one could simply slide the cards in without hassle, i.e. without having to turn them 90°? It would carry all the PCIe slots (e.g. 8 slots) plus 2-3 entry points for the "uplink" extension cables. Otherwise I'll have to do this using the RSC-R2U-2E8 or something similar, though I guess that would be limited to PCIe 2.0, and it would be quite a hack with drilling holes etc.
It seems the problem was the additional 8-pin power supply for the 8x8 bifurcation riser; I had simply forgotten to connect it. Apparently this is what burned out the first three pins on the motherboard's PCIe x16 slot and on the riser, or could it be something else? The most interesting thing is that the riser continues to work, only at a lower speed.
Does anyone here know of any options for attaching two or more M.2 PCIe SSDs to a single M.2 slot? Could be either through bifurcation (if any motherboards support M.2 bifurcation), or even better would be using a PCIe switch.
The low-profile PCIe x8 Broadcom HBA 9400-16i 'Tri-Mode' seems to support four NVMe drives at x4 (eight at x2, and up to 24 with expanders).
This card is normally limited to PCIe x8 (there is an x16 version), but if it is connected to a PCIe x4 slot adapted from M.2, I wonder whether it still works, just limited to x4 speed.
I have just ordered one from ebay to test.
Quite pricey solutions though.
ETA of 9400-16i end of next week (from US to UK)
Broadcom confirmed the HBA would work with a PCIe x4 electrical connection (x8 physical slot) but with max speed halved.
I've successfully connected a U.2 Optane 905P to an M.2 socket using that 'red pcb M.2-U.2 adapter' and the cable supplied with the drive.
I have the 'green pcb M.2 socket to PCIe x4 adapter' and x4 cable, but not tried yet, as I used one of C_Payne's x8x8 Bifurcation Risers instead.
ADT-Link do provide some useful flexible risers; I'll probably get one of those next time. My x4 cable is 3M, and the M.2 part is 'JSER'.

By the way, if you want to skip a step in M.2 -> ribboned x4 slot you could get this: https://www.aliexpress.com/item/32860198563.html?spm=a2g0s.90423188.8.131.52db4c4djZarSS
$ lspci
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:02.0 Display controller: Intel Corporation Device 3e96
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th Gen Core Processor Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Device a368 (rev 10)
00:15.1 Serial bus controller [0c80]: Intel Corporation Device a369 (rev 10)
00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
00:16.1 Communication controller: Intel Corporation Device a361 (rev 10)
00:16.4 Communication controller: Intel Corporation Device a364 (rev 10)
00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
00:1b.0 PCI bridge: Intel Corporation Device a340 (rev f0)
00:1b.6 PCI bridge: Intel Corporation Device a32e (rev f0)
00:1c.0 PCI bridge: Intel Corporation Device a338 (rev f0)
00:1c.1 PCI bridge: Intel Corporation Device a339 (rev f0)
00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port 9 (rev f0)
00:1e.0 Communication controller: Intel Corporation Device a328 (rev 10)
00:1f.0 ISA bridge: Intel Corporation Device a309 (rev 10)
00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
02:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
03:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
06:00.0 Non-Volatile memory controller: Device 1d97:1160 (rev b0)
Also I have good news:
My PLX x16->x8x8x8x8 board is finally in a state where I am willing to sell it. It's not perfect yet, and I will make a larger one with more spacing in the coming weeks.
Please be aware that it's super time-consuming to test and assemble; the PCBs and parts are also a lot more expensive than those of my bifurcation boards.
Hence the price.
There is of course a small latency increase from the switch, but since a packet can be transferred twice as fast over x8 compared to x4, I would guess this is largely offset.
Other than that, the uplink is x16, and each downstream device can get x8.
If all four were to transmit simultaneously, each would effectively only get x4, but this will be a rare occurrence.
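A rough back-of-the-envelope for that sharing, assuming PCIe 3.0 at roughly 985 MB/s per lane per direction (after 128b/130b encoding; real throughput is a bit lower):

```shell
#!/bin/sh
# PCIe 3.0 carries roughly 985 MB/s per lane per direction.
# An x8 downstream port bursts at x8 speed, but four ports saturating
# at once share the x16 uplink, landing each at x4-equivalent bandwidth.
per_lane=985
echo "x8 device, uncontended:       $((8 * per_lane)) MB/s"
echo "x16 uplink ceiling:           $((16 * per_lane)) MB/s"
echo "4 devices saturating at once: $((16 * per_lane / 4)) MB/s each"
```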
Also, cards can do direct transfers through the switch; as far as I know CrossFire uses this, for example, and I am sure some other workloads do as well, but this is not really something I know much about.
It should still show up in lspci. Does the bandwidth in the LnkCap section drop for the root port?
If it's like my ASRock Rack E3C246D2I, then there's a jumper to select whether the OCuLink port is PCIe x4 or SATA. Maybe they could do it in software on the X570D4I-2T.

I hope someone has tried to get an x4 PCIe slot out of the OCuLink ports, because I have found conflicting info on whether those are PCIe and SATA compatible or just SATA-carrying plugs.
Again, my Intel board is the same: they provide a 24-pin to 4-pin adapter, which basically provides the soft-power function, and the board has the 8-pin connector for the remaining power needs.

Also, no 24-pin connector is super weird...
First off, a thank you to all who have been participating in this thread over the last couple of years. I've gone through this thread 3-4 times start to finish in the last year or so while working through my own setup.
I've successfully done bifurcation on my ASRock X470 Gaming-ITX/ac with a Ryzen 5 1600. It boots, it sees both cards, and if I run Folding@home it will utilize both GPUs; that part is great.
What I have been struggling with is getting SLI to work. I'm using a pair of PNY GTX 760s (760s because I've had them since new, with waterblocks, and never actually used them in anything other than temporary builds and case-review photos, but also because their size is right for what I'm doing).
The strange thing is that in nVidia control panel, if I boot the system up with an SLI bridge, I don't get any of the SLI options, and running a full screen benchmark does not utilize the second card.
However, if I boot the system without an SLI bridge, it tells me "Connect an SLI bridge to enable blah blah blah", and I see the SLI options, but they're disabled (grayed out).
Is this something anyone here has run into? If my Vega Nanos were shorter height-wise I'd use those instead, but that's not really an option for what I'm doing...
From what I recall, SLI has to be certified per motherboard: boards have to be submitted to Nvidia for certification before SLI can be enabled on them. For ITX boards the certification rarely makes business sense, so they generally do NOT support SLI.
HOWEVER, there are instances of people modifying things to make SLI work; I just can't find any examples at the moment because I'm at work.