Installing 2 M.2 SSDs on a Z490 motherboard

I'm currently using a Z490 motherboard with an i7-10700K and have a 512GB M.2 SSD installed. I'm thinking about getting a 4TB M.2 SSD from PCCG for storage to replace the traditional hard drive I've been using for years now: https://www.pccasegear.com/products/51139/adata-xpg-s40g-rgb-m-2-nvme-ssd-4tb

I heard the Z490 chipset only supports 20 PCI-E lanes, and I have an RTX 2080 Ti. I'm not familiar with PCI-E lanes and how they work. If I have two M.2 slots occupied along with a dedicated PCI-E GPU, will that completely use up the PCI-E lanes, and will I see a performance decrease as a result?
 
Z490 supports 16 CPU lanes to your GPU (or you can split them x8/x8 for dual GPU). It then has a DMI 3.0 link to the PCH, which is basically a PCIe 3.0 x4 link. So anything plugged in besides those two PCIe slots is sharing a single PCIe x4 link equivalent. If you're reading from and writing to two NVMe drives at the same time, each effectively gets x2 worth of bandwidth. That link is also shared with things like your network, some USB, and SATA; the more of it you're using, the slower each device will run. Welcome to the land of compromise ;). This is another reason some people prefer AMD for things other than gaming, but I'll leave it at that as it's a bit off topic and you already have what you have.

Summary: your GPU by itself uses all of the PCIe lanes that go directly to the CPU, regardless of whether an NVMe drive is installed. The reason you can run more devices is the multiplexing/sharing in the PCH; the more devices share it, the less performance each may have.
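
To put rough numbers on that lane math, here's a quick back-of-the-envelope sketch (Python, illustrative only; real-world throughput comes in a bit lower after protocol overhead):

```python
# Back-of-the-envelope PCIe 3.0 bandwidth math (illustrative only).
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding; actual
# throughput is lower once protocol overhead is accounted for.

GT_PER_S = 8.0          # PCIe 3.0 signalling rate per lane (GT/s)
ENCODING = 128 / 130    # 128b/130b line-code efficiency

def link_bandwidth_gbs(lanes: int) -> float:
    """Raw one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    gbit_per_s = GT_PER_S * ENCODING * lanes
    return gbit_per_s / 8  # bits -> bytes

print(f"x16 slot to CPU: {link_bandwidth_gbs(16):.2f} GB/s")  # ~15.75
print(f"DMI 3.0 (x4)   : {link_bandwidth_gbs(4):.2f} GB/s")   # ~3.94
```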
 
Just a bit more info. The DMI 3.0 link can support about 4GB/s of transfer. So if you are running two NVMe M.2 drives, each one individually could probably hit its maximum rate (a lot can reach 3,500MB/s now), but if you transfer from one to the other you will see at most about 2,000MB/s (half each, give or take overhead), and if you set them up in RAID you will only ever get about 4,000MB/s total. That said, you will not notice this in any way in normal operation, especially if your data tends to load from one drive or the other and not both at once. Also keep in mind that using the second NVMe slot will disable some SATA ports, so if you are using any of those, make sure they're plugged into the right spots to support dual-NVMe operation.
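
If it helps, the even-split behaviour described above can be modelled as a toy calculation. The even split and the 3.94 GB/s figure are simplifying assumptions, not how the chipset actually arbitrates traffic:

```python
# Toy model of the "everything behind the PCH shares one x4 link" idea.
# The even split below is a simplifying assumption, not a real
# arbitration model.

DMI_GBS = 3.94  # approximate usable DMI 3.0 bandwidth (GB/s)

def effective_rates(demands: list[float]) -> list[float]:
    """Scale each device's demand down if the shared link is saturated."""
    total = sum(demands)
    if total <= DMI_GBS:
        return demands  # link not saturated, everyone gets what they ask
    scale = DMI_GBS / total
    return [round(d * scale, 2) for d in demands]

# Two NVMe drives that each want ~3.5 GB/s at the same time:
print(effective_rates([3.5, 3.5]))  # -> [1.97, 1.97], i.e. ~2 GB/s each
```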
 
Alright, thanks for the reply. I'll look into it, as I feel an M.2 SSD will suit my needs better due to its higher performance. I don't really care about losing SATA ports since I'm only using one SATA connection anyway.
 
The explanation above is all correct. However, in reality the DMI 3.0 link is rarely a bottleneck outside of drive benchmarking on an NVMe RAID 0 array; that's just about the only time you routinely see bandwidth limitations come into play. While you'd think AMD would be better on this front, the reality is that the same drive performs the same on both platforms. The only edge AMD truly has on X570 is its PCIe 4.0 support.
 
Correct, it's almost unnoticeable to most folks. The only time you'd even notice would be under very sequential loads, RAID or not, such as transferring large sequential files between two NVMe drives. And to be honest, you can still transfer at nearly 2GB/s, so it's not as if the copy is going to be slow; most of the time it's lots of little files, and you'll be limited by drive speed and IOPS more than by the link. DMI 3.0 is more than adequate for most people; it's those on the border of HEDT, or those chasing benchmark scores, who would notice. PCIe 4.0 also isn't AMD's only edge: there are four extra CPU lanes for the first NVMe drive, meaning you can run two NVMe drives at pretty much full speed without slowdown (even if both are 3.0). That's very niche, but hey, it's an enthusiast site, so it's good to know about, especially when someone is asking about installing two NVMe drives and how the PCIe lane distribution works.

Anyway, this is why I did a quick follow-up to point out that in practice it's still very fast and not something the average user would notice, just to make sure it was understood that it most likely won't affect you. If you're curious, you can measure it yourself with something like the sketch below.
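
A minimal timing sketch for checking your own drive-to-drive copy speed. The paths are placeholders, and OS write caching can make short runs look optimistic, so use a file much bigger than your RAM for a fair result:

```python
# Minimal drive-to-drive copy timing sketch. SRC and DST are
# placeholders; point them at a large file on one NVMe drive and a
# destination on the other. Short runs can be inflated by OS write
# caching, so prefer a file bigger than your RAM.
import os
import shutil
import time

SRC = r"C:\bench\testfile.bin"  # file on the first NVMe drive (placeholder)
DST = r"D:\bench\testfile.bin"  # destination on the second (placeholder)

start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

size_gb = os.path.getsize(SRC) / 1e9
print(f"copied {size_gb:.1f} GB in {elapsed:.1f} s "
      f"-> {size_gb / elapsed:.2f} GB/s")
```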
 
I had the same question recently, and it was answered on Reddit with this...

I'm interested in the 10700K, but I'm put off by the extreme lack of PCIe lanes.
For mainstream use, I'd be astonished if you managed to actually have a workload that saturates the DMI. As far as I can tell, there are more than enough PCIe lanes for all your uses, and then some.
"So lets say I'm playing RDR2, taxing the GPU PCIe 16x bandwidth"
You aren't saturating all 16, probably not even half, despite all 16 being allocated to the GPU
"and the NVMe drive is on the Z490 chipset PCIe lanes, and I'm copying a massive file"
Well, you'd need to be copying to or from another drive, likely also on the chipset, so you're probably not saturating much of the DMI at all.
"I have a torrents saturating my network bandwidth in the background."
Again, that's not really saturating the DMI much; the network is on the chipset, and so is the drive it's being downloaded to.
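
To make the "you won't saturate it" point concrete, here's a rough tally of that whole scenario against the DMI budget. Every per-workload figure below is a made-up estimate for illustration:

```python
# Rough tally of the worst-case scenario against the DMI budget.
# All per-workload figures are made-up estimates. Note the GPU doesn't
# appear here at all: its x16 link goes straight to the CPU and never
# touches the DMI.

DMI_GBS = 3.94  # approximate DMI 3.0 bandwidth (GB/s)

chipset_demands_gbs = {
    "game asset streaming from NVMe": 0.5,    # bursty, rarely sustained
    "big file copy, reads from NVMe": 2.0,    # generous sustained figure
    "torrent at full gigabit":        0.125,  # 1 Gb/s NIC = 0.125 GB/s
}

total = sum(chipset_demands_gbs.values())
print(f"demand: {total:.2f} GB/s of ~{DMI_GBS:.2f} GB/s available")
# Even this contrived pile-up (~2.6 GB/s) leaves headroom on the link.
```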


And the other answer I received...

It's not possible for the chipset to directly write into the drive; it has to go through the CPU. Practically everything must go through the CPU. The chipset is just a handler for the IO and PCIe lanes (and some other stuff).

The CPU has to send instructions and communicate with the drive through the chipset, to oversimplify it. After that, the chipset can do a lot by itself, assuming part of the data isn't stored in RAM (really unlikely); in that case, the CPU has to handle that part.

To be 100% clear: there's always a data exchange through the DMI. That said, userremoved is right that not all of the data needs to pass through the CPU.

uTorrent is a particular case: the fact that a file has to be "put together" prevents the chipset from directly writing it to the drive without some caching/proxying first (not sure which of the two).

It's pretty rare for a file to live entirely on one drive without also being partially stored/paged into RAM. In that case the CPU has to work too, since the only access to memory over DMI 3.0 is through the CPU's memory controller.

Theoretically a file can be written directly while being downloaded, but Windows security processes will be scanning it. I'm not completely sure about this, since that was the old method; I don't know if the file is now held and checked after the download completes.

For the rest you're 100% right. You won't fill the DMI that fast. At least on a Z490.


Hope this helps; it gave me better insight.
 