Not getting 10Gb/s and totally perplexed.

I'm trying to set up a 10Gb/s network between my main NAS, my Windows Emby/Minecraft server, and a test NAS.

The main NAS is a 4790K with a 10GbE card, and the Windows box is an E5-2699 v3 on a Gigabyte X99-UD5 with a 10GbE card. The test NAS is an old FX-8320... nothing special. These boxes are sitting side by side and are connected directly into the switch. Most of my network cards are the ASUS XG-C100C, which is a PCIe Gen 3 x4 card... I also have one card that is PCIe Gen 2 x8 for the old stuff that only runs at Gen 2.

When I run iperf between the two NAS boxes, I get 9 Gb/s, which seems reasonable.

However, when I run iperf to the Windows box, it seems to be plagued with gremlins... I get more like 3 Gb/s.

So then I swapped network ports on the switch and swapped cables... 3 Gb/s. So how about a different PCIe slot, right? The X99 board has two x16 and one x8. Nope, 3 Gb/s. Same with a different card. Then I tried safe mode with networking and got up to 5 Gb/s. Then I booted from my test NAS's boot drive and was up around 9 Gb/s.

So, what gives with Windows and TrueNAS not playing nice?
 
Which model cards do you have, and exactly how do you have them wired (switch, DAC, etc.)?
 
OK, if I'm reading this right, you swapped a bunch of stuff around, but basically the E5-2699 v3 on the Gigabyte X99-UD5 with Windows does 3 Gbps in normal mode and 5 Gbps in safe mode, and it doesn't much matter which NIC or cable you use; but you get 9 Gbps if you boot that board with Linux?

If that's the case, double-check drivers, but there's probably some sort of tuning issue. The easiest thing would be to try running iperf with its parallel mode, or just run multiple iperfs. You might try CPU-pinning iperf. I've never done high-performance network tuning for Windows, but I know it's a thing*; try fiddling with the options for receive side scaling and the number/size of TX/RX queues in the driver properties. I don't know if there's an easy way to set which core processes interrupts for the NIC in Windows, but an application that's network I/O bound with near-zero application CPU use will perform better when the core running the application is also handling the RX and TX interrupts for the NIC; if Windows tries to be smart and balance the load onto multiple cores, it backfires here, because now there's a lot more cross-core communication, which is super slow.

* Actually, Windows pioneered Receive Side Scaling, which is amazing for high performance networking; so I have a lot of respect, but I'll do my hyperscaling on FreeBSD please, or Linux if I have to, before considering Windows.
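
A rough sketch of what that looks like in practice, assuming iperf3 on both ends; the IP (192.168.1.10) and the adapter name ("Ethernet 2") are placeholders, not anything from this thread:

iperf3 -c 192.168.1.10 -P 4 -t 30
(four parallel streams instead of one)

start /affinity 0x1 iperf3.exe -c 192.168.1.10 -t 30
(cmd's start /affinity pins the client to a core; 0x1 is a hex mask meaning core 0)

Get-NetAdapterRss -Name "Ethernet 2"
(admin PowerShell; shows whether RSS is enabled on that adapter and which processors it's allowed to use)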
 
Have you tried testing with tools other than Iperf? The Wikipedia page on it says the Windows support is unofficial and hasn't been maintained since 2016.

Maybe just try copying a file?

I get a little over 8.5 Gb/s in either direction just using a Windows file copy between a Xeon E5-2687Wv2 running Windows and an i3-10100 running Red Hat 8 and Samba. Or at least I do for the first few GB when receiving on the Windows box, or when sending a second time. That old machine only has SATA SSDs, so if the file isn't cached in RAM on a send, or the cache fills up on a receive, it slows down to a bit over half of that, since it's going at SATA SSD speed.

Also, I'm using NVidia/Mellanox ConnectX-4 cards. So server NICs. That might make a difference.
 
OK, if I'm reading this right, you swapped a bunch of stuff around, but basically the E5-2699 v3 on the Gigabyte X99-UD5 with Windows does 3 Gbps in normal mode and 5 Gbps in safe mode, and it doesn't much matter which NIC or cable you use; but you get 9 Gbps if you boot that board with Linux?

If that's the case, double-check drivers, but there's probably some sort of tuning issue. The easiest thing would be to try running iperf with its parallel mode, or just run multiple iperfs. You might try CPU-pinning iperf. I've never done high-performance network tuning for Windows, but I know it's a thing*; try fiddling with the options for receive side scaling and the number/size of TX/RX queues in the driver properties. I don't know if there's an easy way to set which core processes interrupts for the NIC in Windows, but an application that's network I/O bound with near-zero application CPU use will perform better when the core running the application is also handling the RX and TX interrupts for the NIC; if Windows tries to be smart and balance the load onto multiple cores, it backfires here, because now there's a lot more cross-core communication, which is super slow.

* Actually, Windows pioneered Receive Side Scaling, which is amazing for high performance networking; so I have a lot of respect, but I'll do my hyperscaling on FreeBSD please, or Linux if I have to, before considering Windows.
I did iperf with -P 8 and get 4.5ish send and 4.5ish receive. I'm thinking that adds up to 9ish... so it's how Windows handles stuff.
 
So you are using a 10 GbE switch? The NAS boxes and the Windows box are all connected directly to the 10 GbE switch? Are your cables good?
 
I did iperf with -P 8 and get 4.5ish send and 4.5ish receive. I'm thinking that adds up to 9ish... so it's how Windows handles stuff.

You should really be able to get 9ish in both directions at the same time, but it kind of is what it is. Maybe see if you can tell iperf to use bigger socket buffers? How does a large file copy look?

iperf is fun, but you probably didn't put together a 10G network to mess around and take benchmarks (although, I have to be honest that that's 95% of why I got 10G cards at home, but also I didn't have a better way to see if I could run 10G between my network closets than getting a pair of cards for my servers)
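
For the socket buffer idea, something along these lines should do it, assuming iperf3; the 4M value is just a number to experiment with and the address is a placeholder:

iperf3 -c 192.168.1.10 -w 4M -t 30
(-w sets the TCP window / socket buffer size for the test)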
 
So you are using a 10 GbE switch? The NAS boxes and the Windows box are all connected directly to the 10 GbE switch? Are your cables good?

Straight into the switch, with brand-new 6 ft cables that I actually bought instead of making myself.

You should really be able to get 9ish in both directions at the same time, but it kind of is what it is. Maybe see if you can tell iperf to use bigger socket buffers? How does a large file copy look?

iperf is fun, but you probably didn't put together a 10G network to mess around and take benchmarks (although, I have to be honest that that's 95% of why I got 10G cards at home, but also I didn't have a better way to see if I could run 10G between my network closets than getting a pair of cards for my servers)

It's getting even more frustrating for me this morning, the more I read and test.

On my TrueNAS box I store my Emby library. It is housed on 4x 20TB Seagate Exos drives in RAIDZ1 (RAID 5). So I decided to move a big file around... Fellowship of the Ring in 4K at 144.8 GB should do nicely. I copied FOTR from the HDDs to the NVMe in the Windows box. It took 26:31, or 93.2 MB/s.

Then I transferred it back and got 424.9 MB/s. Granted, there's some RAM caching going on with my 32GB total, buuut still.

So I turned on the jumbo frame stuff in Windows, and no difference.

Next, I tried regular iperf3 again... still 3 Gb/s. So then I tried the -R flag to have the server do the sending, and it's more like 1.5 Mb/s. I'm tempted to try my 5950X machine, which has a high single-core rating; it gets 5 Gb/s on iperf over the old copper. But moving it... that's a PITA.

ETA: changed the MTU on TrueNAS to 9014 and now I'm seeing higher results in iperf.
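
For anyone following along, a quick way to confirm jumbo frames actually work end to end from the Windows side (9014 counts the Ethernet header; the IP MTU is 9000, and the largest unfragmented ping payload is 9000 minus 28 bytes of IP/ICMP header). The address is a placeholder:

ping -f -l 8972 192.168.1.10
(-f sets don't-fragment and -l sets the payload size; if this fails while a normal ping works, something in the path isn't passing jumbo frames)
netsh interface ipv4 show subinterfaces
(shows the MTU Windows is actually using on each interface)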
 
I wonder what speeds you will get if you connect your Windows box directly to the NAS without the switch?
 
You don't have a virus.
You don't have a technical problem.
You don't have a cabling issue.
You have a case of not enough threads, too small a TCP window, etc. During the iperf test, try this command:

iperf -c x.x.x.x -i 1 -t 10 -f m -P 7

-c means client mode

-i 1 means report the speed every 1 second

-t 10 run for 10 seconds

-f m reports in Mbps, or you can use -f g for Gbps

-P 7 means 7 parallel streams transferring data at the same time (note the capital P; lowercase -p sets the port)

You are only using a single data transfer stream with the default command, so you need to bump up the threads... not CPU threads, but parallel transfer streams.


Replace the x's with the server address you're using.

I'm just an old-school network engineer. iperf is one of the BEST test methods even today, especially over big-ass high-speed fiber pipelines.
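
If the boxes have iperf3 rather than classic iperf, the equivalent would be something like this (same placeholder address; -R just flips the direction so you can test both ways without swapping client and server):

iperf3 -c x.x.x.x -i 1 -t 10 -f g -P 7
iperf3 -c x.x.x.x -i 1 -t 10 -f g -P 7 -R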
 
I came to that conclusion as well. However, I also determined I needed to change the MTU to 9000.

It is an interesting exercise... the older CPUs needed a -P 4 to saturate it. Not even a Zen 3 could saturate it with a single stream... that only gave about 55%, so a -P 2 was needed for them.

This of course makes me wonder: on an actual file transfer, is the CPU still only using a single thread? If so, how kneecapped is my old Xeon compared to a 13th-gen Intel?

My next oddity: when I have a pool of a single SSD and I try to write to it, I am capped at 220 MB/s. That is like a third of 6 Gb/s. My 4x HDD pool kicks the crap out of it. I should get ambitious, put the OS on the SSD, and try the NVMe for transfers to see what happens.
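
One way to take the network out of that single-SSD question is to benchmark the pool locally on the NAS first, e.g. with fio if it's available. This is just a sketch: /mnt/tank is a placeholder for your pool path, and don't use dd from /dev/zero for this, since ZFS will happily compress the zeros.

fio --name=seqwrite --directory=/mnt/tank --rw=write --bs=1M --size=4g --end_fsync=1
(sequential 1 MiB writes onto the pool, with an fsync at the end so RAM caching doesn't flatter the number; if this also lands around 220 MB/s, the network isn't the limit)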
 
You don't need jumbo frames for 10Gbps. I can routinely saturate my 40Gbps setup with iperf3, without jumbo frames. You have something else going on. I've only skimmed the thread, but I don't see where you say what NIC or switch you're using other than "most of...", which isn't helpful. Specifically, which NICs, with what driver, and what switch? What's your VLAN setup? What does your switchport show - is it clean, or are errors accumulating? Have you tried different cables and switchports? Have you tried connecting directly and skipping the switch?

You don't "add" 4.5 + 4.5 and get 9, and say you're operating at nearly 10Gbps. It's duplex, so you should get 10 in each direction simultaneously.
 
Windows file transfers are single threaded. Your old Xeon won't be much of a handicap there unless you have a lousy NIC. A good NIC will offload as much work as possible from the CPU. A cheap one will make the CPU do a bunch of work. So I'm really wondering what NICs you're using, since my old Xeon is whipping yours. My E5-2687Wv2 has a higher max turbo clock -- 4.0 vs. 3.6 -- but it's an older architecture than your E5-2699 v3, so my old Xeon should be less than 10% faster on a single-thread load, and might even be slower.

I played around with iperf3 and got about 9.1 Gb/s with the Windows Xeon E5-2687Wv2 sending and 9.5 Gb/s with the Linux i3-10100 sending over TCP. I can't seem to get that high with UDP, which means iperf3 is either busted or hard to set up right. It seems that the -l parameter on the client makes a big difference for TCP. I've been using -l 1M, so iperf3 uses a 1MB send buffer (I think). No -P flag, so running single threaded. Without -l I get a little over 7 Gb/s.

You have something else going on other than an old CPU, and I'm really wondering what NICs, switches, etc. you're using. Especially NICs. Mine are all pretty top tier from a few years back: NVidia/Mellanox ConnectX-4 in these two machines. But do try that -l flag and see what happens. iperf3 seems to need a fair bit of tuning; defaults are slower than a Windows (SMB) file copy. I still haven't figured out how to get decent speed out of UDP, and UDP ought to be faster than TCP.
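
For reference, these are the sorts of invocations meant here; the address is a placeholder and the values are just starting points to experiment with:

iperf3 -c 192.168.1.10 -l 1M -t 30
(single TCP stream with a 1 MB read/write buffer, i.e. the -l tweak described above)
iperf3 -c 192.168.1.10 -u -b 10G -l 8972 -t 30
(UDP at a 10 Gbit/s offered rate with datagrams sized for a 9000 MTU; with a standard 1500 MTU keep -l under 1472 to avoid fragmentation)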
 
It definitely seems to be a thread limitation with iperf.

When I go with a -P 2 I can pretty much saturate it. The other thing I have found is that I need jumbo frames.

Most of my cards are this one from Asus:
https://www.amazon.com/dp/B072N84DG6

Then, because the old FX system only goes to PCIe Gen 2, I got this XZSNET one since it is an x8 card.

https://www.amazon.com/dp/B0BJKMBQWY

Since then, I figured out how to configure Emby to use the NVIDIA card for transcodes on the test box, so I rolled that setup over to my main box. That meant I had to reconfigure my main NAS to include my 1080 Ti for Emby transcodes. I could not get the 1080 Ti to fit in PCIe slot #1 and still clear the hard drives, so instead I had to put it down in #2 and put either the NIC or the SATA expansion card in #1. For some reason, the ASUS card won't work if the GPU isn't in #1. Sooo, the x8 XZSNET is now in my main NAS. In case anyone wants to know, a 10 Gbps PCIe Gen 3 x4 NIC running at Gen 2 gives about 6.2 Gbps.
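
Side note for anyone chasing the same thing: you can see exactly what link a card negotiated from the OS. On TrueNAS SCALE (Linux) it's lspci; the bus address below is just an example, so substitute whatever the first command shows for your NIC. On CORE (FreeBSD), pciconf -lvc gives similar link info.

lspci | grep -i ethernet
(find the NIC's bus address, e.g. 03:00.0)
lspci -s 03:00.0 -vv | grep -i lnk
(run as root; LnkCap is what the card supports, LnkSta is what it actually negotiated, e.g. "Speed 5GT/s, Width x4" for Gen 2 x4)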
 
I use jumbos and that seems to help, but to saturate I still need at least two threads.

For me, it works... my limiting factor is the spinners in my NAS.
Gotcha... I am all NVMe or SSD... I ran mine Linux to Linux with iperf and everything works...
So one would say Windows... but Windows 10, 11, and Server 2022... no matter how I mix them, all show the slowness...

Linux, however...
[screenshot: Linux-to-Linux iperf results]
 
OK, got it now...

Turns out it is the Intel X520 drivers... what a PITA, and such a shame that I bought 4 cards I am not going to use!!!

I purchased 2x Mellanox ConnectX-3 EN cards and BOOM... we are good to go.
Just ordered 3 more, so now I can let y'all know....

Windows - ConnectX-3
Everything else - Intel (even ESXi 8 installed and sees 10Gb on Intel)

 
That is interesting...

Seeing as I've had trouble as well...

In machine #1, what was the OS and what was the NIC?
In machine #2, what was the OS and what was the NIC?

In both machines now, are they both replaced with the Mellanox cards?
 
So, in machines 1 and 2 it WAS:
Intel X520-DA2 and Windows Server 2022
In PC 3:
Intel X520-DA2 and Windows 10
In PC 4:
Intel X520-DA2 and Windows 11

Now, PCs 1 and 2... I rebooted them on thumb drives, put Parted Magic on, and got 9.4Gb (screenshot up there).
So only the OS/drivers were different... all hardware the same.


Now to answer the NOW....
I put a Mellanox in PC 3, Win10 (Plex), and also put one in PC 5 (LOL... the Plex storage box that I have not tried the Intel X520-DA2 in yet),
and that is where I am getting the full 9.2Gb...

So machine 1 was rebuilt with ESXi and sees 10Gb - haven't tested performance, but the issue I had with the ConnectX-1 and -2 cards was that ESXi didn't support them at all... didn't see them.

Machines 2 and 4 will get Mellanox ConnectX-3 cards when they come in, and I'll have a spare on hand.
This also gives me 4 extra Intel cards that I am not too happy with, but I might have to fire up Proxmox, TrueNAS, and CPX (think that is what it was)....

I can do a max of 8x 10Gb on this switch, so I've got room to play.

ESXi 8: [screenshot]
 
And in ESXi, a Windows 11 VM with the Intel NIC... back to my Plex box with the Mellanox. It isn't great, but it's better than the 2.5Gb I was getting with these Intel NICs.



Wonder if I can install the Intel drivers instead of the VMware Tools drivers for the NIC...
 
Just for clarity, since it's a bit hard to follow, were you using these NICs always in a VM or was that a later change?
 

Yeah, my brain goes all over the place.

Here it is explained a bit... I have old Mellanox ConnectX-1 cards and one ConnectX-2 card. They have a proprietary port called CX4. The cables are big and bulky, but they are 10Gb and I got 10Gb in Windows VMs. ESXi 6.7 and FreeNAS 11 supported those cards.

Now ESXi 8, Proxmox, and TrueNAS do not support them, and Windows has started to give me gripes about speed issues (I think it was due to jumbo frames though), so I decided to move with the times and go to a newer style.

I have a home lab that I like to play around with: ESXi, Proxmox, TrueNAS, Windows Hyper-V, and more... With that, I wanted to clean up my utility room in the basement, as it is a horrendous nightmare to look at.

Decided to go Intel - why not - but I was struggling to get faster than 2.5Gb on the hardware systems, not VMs... the VM test this morning was higher than 2.5Gb, and I can live with that.
Mellanox ConnectX-3 cards got me to 10Gb speeds on physical hardware.

Hope that clears up any confusion...
 
I'm actually not all that surprised. Unlike a lot of server NICs, Mellanox/NVidia ConnectX cards actually have official support for client Windows. I have a few ConnectX-4 cards, and one of those will pretty much hit 10Gb in an old-ass Ivy Bridge Xeon E5-2687W machine (a used eBay CPU upgrade from an i7-3820) running Win10. Those Intel X520-DA2 NICs (released 2009) are also quite a bit older than the ConnectX-3 (2013) and ConnectX-4 (2014) cards. The ConnectX-4 also had quite a long life. I'm not sure if they still make them, since NV's site only lists ConnectX-6 & 7, but it's not hard to find new-in-box ConnectX-4 cards. fs.com has them in stock.
 
I should have looked at how old these cards all were...
I am fine keeping the Intel cards for ESXi, Proxmox and all that, and the Mellanox for Windows... I have a thread on the Intel forums, so finally some traction; they say they are going to look into it... will see...
 
Got the Mellanox in... still getting 9.4Gb Server 2022 to Server 2022, but getting 5Gb from Win11 to the same Server 2022...

[screenshot: iperf result]
 
Check this reply by Lester; it has a few things you can try:

https://answers.microsoft.com/en-us...t-speeds/bd9d0d11-c11a-48bc-9b4c-a15f952551b1
First, what we will do is enable the WWAN and WLAN services; these services are essential for the wireless and wired connections to run perfectly.

- Open Services (press Windows key + R, type services.msc, then click OK)

- Look for WLAN AutoConfig and WWAN AutoConfig > right-click > Properties and set them to Automatic (if they're already set to Automatic, right-click, click Stop, then Start them again)

- Restart the PC and check


If the issue persists, run the following commands in Command Prompt (Admin). Follow the steps below to do so.

This set of commands will reset the network connection and recalibrate the network settings you have.

Press Windows Key + X.
Click on Command Prompt (Admin).
Type the following commands, and hit Enter after each command:

netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global rss=enabled
netsh winsock reset
netsh int ip reset
ipconfig /release
ipconfig /renew
ipconfig /flushdns
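
Worth adding alongside that list: you can see what the globals are currently set to, and if disabling receive-window auto-tuning makes things worse (it often does on a 10GbE link, since the receive window then can't grow), it's easy to put back to the default:

netsh int tcp show global
(shows the current TCP global settings so you can see what actually changed)
netsh int tcp set global autotuninglevel=normal
(restores receive window auto-tuning to the Windows default)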
 
Got the Mellanox in... still getting 9.4Gb Server 2022 to Server 2022, but getting 5Gb from Win11 to the same Server 2022...

I've been meaning to test client to client and then client to TrueNAS for more data points, and seeing as I have two types of cards, compare them. But what you are showing seems to be what I remember... to get (near) full throughput I'd have to use at least two threads on the Win10 machine. If I don't use jumbo frames, it is even worse.
 
The Intel card wouldn't go over 2.5Gb... so I'm happier...

However, I think I got it...

I have an MSI Z690-A DDR4 board. It states it has 3x PCIe x16 slots... so I have the video card closest to the CPU, and I had the NIC in the farthest slot. I just moved it closer to the GPU and BOOM...

 
So, without looking at the manual for your mobo, methinks by moving it there you are now using PCIe lanes that go direct to the CPU and not through the chipset. As a result, methinks your GPU would now be running at x8 instead of x16. GPU-Z would certainly tell you.

Now then, IF I am correct, why are chipset PCIe slots so kneecapped for this operation?
 
2 x PCI Express x16 slots, running at x8 (PCIE_3, PCIE_4)
* The PCIE_4 slot shares bandwidth with the PCIE_1 slot. When the PCIE_4 slot is populated, the PCIE_1 slot will operate at up to x8 mode.
* When a 28-lane CPU is installed, the PCIE_2 slot operates at up to x8 mode and the PCIE_3 operates at up to x4 mode.
(All PCI Express x16 slots conform to PCI Express 3.0 standard.)

3 x PCI Express x1 slots
(The PCI Express x1 slots conform to PCI Express 2.0 standard.)
Oof! PCIe 2.0 x1. No wonder.

Edit: oh, nvm, it's 3 8x and 3 1x, but two of the 8x slots are actually on a switch, and one shares bandwidth (so it could be 16x).

That board layout is wild.

[screenshot]
 
That is very thorough!!!!! LOL...
Thanks for that info!

I am gaming now, so I will see if the GPU gets smashed... if so, I will move the NIC back and rather have 5Gb than muck with my FPS in MW3...

Looks like it is slamming my card... LOL, 30 FPS!!!!
 
Oof! PCIe 2.0 x1. No wonder.

Edit: oh, nvm, it's 3 8x and 3 1x, but two of the 8x slots are actually on a switch, and one shares bandwidth (so it could be 16x).

That board layout is wild.
Mine is an X99... it is an old HEDT. I think the Xeon I have has something like 40 lanes... I dunno. I'm not lane-challenged. Heck, I put my 1080 Ti in the bottom slot and it still runs at x16.

TeleFragger has a Z690. I'm not reading the manual because I'm lazy, and it can't be much different than my X570, as lanes are lanes... but based on the description there are three full-length slots...

So you'd have:

Bottom slot runs at x4 from the chipset

Middle slot runs at x8 from the CPU

Top slot runs at x8 from the CPU if the middle slot is populated, or at x16 from the CPU if the middle slot isn't populated.

Then from the CPU there are another four lanes that feed the chipset and four more lanes for the first NVMe. The chipset then has a bunch of lanes for SATA, more NVMe, USB... but it has to pump everything back through those four lanes, right?

So taking my X570 chipset for example... those four chipset-to-CPU lanes I think are Gen 4, which gives a theoretical bridge of 64 Gb/s. Really, that is a lot of bandwidth. So then the question is, how does it work? Let's say I had two Gen 3 NVMe drives plugged in. In theory, each NVMe should get 32 Gb/s. Then I go to access both NVMe drives at the same time. Does the chipset translate Gen 3 x4 into Gen 4 x2 so we get 32+32=64, or does the chipset fall back to Gen 3 because the devices are Gen 3 and we just get 32/2+32/2=32?

But still, let's say the arrangement is fairly standard: the GPU and an NVMe in the direct-to-CPU slots, then the chipset handling the audio, a USB port with the keyboard and mouse, and a 10 Gb/s NIC. The 10 Gb/s NIC isn't even a quarter of that bandwidth, so why the kneecapping?



Sorry for the tangent rant question.....
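
For the napkin math, usable per-lane PCIe bandwidth (after encoding overhead, before protocol overhead) comes out to roughly:

PCIe 2.0: 5 GT/s x 8b/10b ≈ 4 Gb/s per lane (x1 ≈ 4, x4 ≈ 16)
PCIe 3.0: 8 GT/s x 128b/130b ≈ 7.9 Gb/s per lane (x4 ≈ 31.5)
PCIe 4.0: 16 GT/s x 128b/130b ≈ 15.8 Gb/s per lane (x4 ≈ 63, which is where the "64 Gb/s" chipset-link figure comes from)

So a 10GbE NIC only needs about a lane and a half of Gen 3, and the Gen 4 x4 chipset uplink has plenty of headroom on paper; when a chipset slot still kneecaps a NIC, it's usually because that particular slot is only wired x1, or because everything hanging off the chipset is contending for the same uplink at once.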
 
So, top to bottom, the x16 slots are 1, 4, 2, 3.
1 & 2 are 16 lanes, 3 & 4 are 8 lanes.
4 shares bandwidth with 1, so 1 is 8x when 4 is populated.
2 is always 16 lanes.

So, ideally you'd want to use slots 1 and 2, or the first and third 16x slots, for 16 lanes to both cards. Any other config brings at least one down to 8x.
 
None are run off the chipset, FWIW. They all go straight to the CPU. The x1 slots, onboard LAN, mPCIe, and other stuff are on the chipset.

Oh, I mixed up the OP. Hold up a minute.
 
Yeah, so for the MSI Z690-A, the x16 slots 1, 3, and 4 run at x16, x4, and x1 respectively.

Looks like 3 and 4 run off the chipset? That's what it says, but there's no diagram to confirm the exact routing. It shouldn't hurt to use the x4 slot; the only reason I would expect a difference is increased heat getting into the GPU from the LAN card.
 
So, top to bottom, the x16 slots are 1, 4, 2, 3.
1 & 2 are 16 lanes, 3 & 4 are 8 lanes.
4 shares bandwidth with 1, so 1 is 8x when 4 is populated.
2 is always 16 lanes.

So, ideally you'd want to use slots 1 and 2, or the first and third 16x slots, for 16 lanes to both cards. Any other config brings at least one down to 8x.
I have to read that after no beer... LOL.
I only have 2 cards in... an RTX 2070 - gonna upgrade to a 4070 Ti soon - and this Mellanox card... the Intel card in the same slot got 2.5G.
 
Yeah, so for the MSI Z690-A, the x16 slots 1, 3, and 4 run at x16, x4, and x1 respectively.

Looks like 3 and 4 run off the chipset? That's what it says, but there's no diagram to confirm the exact routing. It shouldn't hurt to use the x4 slot; the only reason I would expect a difference is increased heat getting into the GPU from the LAN card.
Well, that explains why TeleFragger was getting kneecapped in the bottom slot. While mechanically x16, it was only electrically x1. Can't say I've seen x16 slots with only x1 electrical on many spec sheets; my understanding is it would be more common for an open-ended slot or something. I don't know....

But yeah, that Mellanox card says it runs Gen 3 x8, and it seems like a lot of them have two ports, which when you work out the numbers comes to about 32 Gb/s per port, which sure seems like a lot left on the table.

Going further into the weeds, cards based on the AQC113 look interesting. They say they can run at Gen 4 x2 or even x1 if you're willing to take a little hit.

https://www.marvell.com/content/dam...cs-aqc114cs-aqc115c-aqc116c-product-brief.pdf
 

Yeah, the Intel cards were dual port... the Mellanox is single.
 
