I've been doing extensive research for a work-related project. It's a niche application that needs a boatload of PCIe lanes. Between using bridge/switch chips or a motherboard with PCIe bifurcation, the bifurcation route seems not only far cheaper but also more reliable, provided the correct parts are selected from the start. I've seen time and time again people splitting x8 and x16 slots into x8 and x4 increments using custom risers and compatible BIOS firmware. My question is this:
Is there some specific technical reason you can't bifurcate an x16, x8, or x4 slot into a large number of x1 lane groups?
For example, the application I'm looking into would benefit far more from 16 individual x1 links from an x16 slot than from four x4 links. The same holds for x8 and x4 slots, with the expectation that only 8 or 4 x1 links would be available respectively. For clarification, the project relates to accessing NVMe drives in large numbers. Throughput is not the issue at hand, and I understand that an NVMe drive on a single x1 lane will run at 25% of its rated speed. Nowhere in the PCI Express Card Electromechanical, PCI Express Mini Card Electromechanical, or Serial ATA specifications is there a requirement that NVMe drives use exactly x4 links to operate. I recall seeing in the standards that the drive negotiates its link width as x1, x2, or x4. I've even found specific cases where drives on certain systems are paired up and share an x4 connection, with two lanes going to each drive in the pair, so I don't expect trouble from the drive side.
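As a side note for anyone wanting to verify what width a drive actually trained at: on a Linux host the kernel exposes the negotiated and maximum link width through sysfs. This is a minimal sketch, assuming standard sysfs paths; a drive sitting behind an x1 bifurcated link would report `current_link_width` of 1 here even if its `max_link_width` is 4.

```shell
# Report the negotiated PCIe link width for every NVMe device the
# kernel enumerated. Assumes a Linux host with sysfs mounted; prints
# nothing (and exits cleanly) if no NVMe devices are present.
for dev in /sys/class/nvme/nvme*/device; do
    [ -e "$dev/current_link_width" ] || continue
    name=$(basename "$(dirname "$dev")")
    cur=$(cat "$dev/current_link_width")
    max=$(cat "$dev/max_link_width")
    printf '%s: negotiated x%s (device maximum x%s)\n' "$name" "$cur" "$max"
done
```

`lspci -vv` shows the same information in its `LnkSta`/`LnkCap` fields if you prefer a one-off check.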
Now, if it were technically possible, I expect I would need to design some rather odd bifurcation PCBs with a REFCLK fanout chip onboard. I assume the harder trick would be the BIOS support. Looking around, I see that something like the Supermicro X11DPX-T would probably fit the bill, as it has a gratuitous quantity of lanes provided both CPUs are installed. Does anyone have any insight on this, or other avenues to explore?