Any point to using 2 video cards nowadays?

ng4ever

I basically mean the onboard graphics on my i7-12700K and my dedicated video card?

Any advantage? Like if you also run Emby or Plex on your PC, mostly just for a little while.

I hear mixed things.
 
My friend runs 6 displays on his PC, all of which are at least 1080p. He's using a Radeon RX 6900 XT, with a Radeon RX 570 as a secondary card to handle some of the extra monitors. I don't recall the specifics, but he said things didn't work out too well when trying to drive all 6 displays with one card (and one of those displays is a 4K 144Hz monitor too). So uuhh, I guess if you intend on using a shitload of monitors you might need more than one GPU. Although I coulda sworn Radeon GPUs were usually fine driving up to 6 displays in the past (like back during the Eyefinity days).

My friend's setup:

View: https://i.imgur.com/AlX5rGn.jpg
 
One thing that you can run into with a mixed video card situation is issues or even app crashes when moving something across monitors that span the cards. We've seen it at work occasionally when someone has a discrete GPU and uses the integrated one for more screens. For most shit it works fine, but even something like video playback can cause issues sometimes.

Not saying it can't work, but I'd question the utility, particularly since it isn't hard to find dGPUs that'll do 4 monitors (and they make ones that'll do more).
 
I have heard of using it in encoding/streaming setups where you're also playing games.
 
If you develop GPU compute software, you need to test your mechanism for using multiple cards.
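To make that concrete, here's a minimal smoke test along those lines - a sketch assuming PyTorch with CUDA-capable cards (adapt it to whatever stack you actually target) that just proves each visible GPU can initialize and do work:

```python
# Minimal multi-GPU smoke test with PyTorch (assumes CUDA-capable cards and a
# working torch install; swap in ROCm/another framework as needed).
import torch

def run_on_all_gpus():
    count = torch.cuda.device_count()
    if count < 2:
        print(f"Only {count} CUDA device(s) visible; multi-GPU path untested.")
    for i in range(count):
        device = torch.device(f"cuda:{i}")
        # Tiny matmul just to prove the device initializes and computes.
        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        c = (a @ b).sum().item()
        print(f"GPU {i} ({torch.cuda.get_device_name(i)}): checksum {c:.2f}")

if __name__ == "__main__":
    run_on_all_gpus()
```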
 
How do you do that nowadays on today's motherboards? The PCIe slots are usually dedicated to one GPU, and if there's an 'extra' PCIe slot it's 'crippled' or limits the lanes you get or something? That doesn't matter for compute/productivity, I guess? But usually you're lucky if you can use a 2nd card, let alone more than that? I guess some expansion methods might allow for more?
I guess I am out of that loop. :-{
 
My board lists this:
  • 1 PCIe 5.0 x16, 1 PCIe 4.0 x16, 1 PCIe 3.0 x16

It has none of the x4 or x1 slots - just 3 full-size slots, though the lower ones run at older PCIe generations. And most GPUs are gonna be fine in a slightly older slot, as far as I've seen.

I keep my iGPU enabled just so I can plug it into my TV to watch movies/shows from my PC. Probably unnecessary.

-bZj
 
How do you do that nowadays on today's motherboards? The PCIe slots are usually dedicated to one GPU, and if there's an 'extra' PCIe slot it's 'crippled' or limits the lanes you get or something? That doesn't matter for compute/productivity, I guess? But usually you're lucky if you can use a 2nd card, let alone more than that? I guess some expansion methods might allow for more?
I guess I am out of that loop. :-{
On HEDT, my TRX40 motherboard has two x16 slots and two x16-size slots running at x8 bandwidth, all Gen 4. Gives a lot of options. Double up two 3090s and bridge them and you get 48GB of VRAM for things like 3D rendering on the GPU.

Whenever the newer HEDT boards come out from Intel and AMD, I am sure they'll be even better, with Gen 5 and all.
 
Whenever the newer HEDT boards come out from Intel and AMD, I am sure they'll be even better, with Gen 5 and all.
I'm still pissed that Threadripper never saw anything past Zen 2 (except for Threadripper Pro). And what was Intel's last HEDT platform, X299? I don't know why HEDT disappeared, but it's dang annoying. I really hope both companies return to HEDT. I'm pretty sure one of the things that killed off HEDT was that mainstream platforms had CPUs going all the way up to 16 cores. HEDT still offered other shit besides high-core-count CPUs back when they were around. Plentiful PCIe lanes and higher-than-dual-channel RAM, just to name a couple things.
 
I basically mean the onboard graphics on my i7-12700K and my dedicated video card?

Any advantage? Like if you also run Emby or Plex on your PC, mostly just for a little while.

I hear mixed things.
The advantage to having both comes into play with certain video-centric software that takes advantage of both GPUs for different tasks (such as video editing software that decodes H.264 and/or HEVC and/or AV1 via QuickSync while also rendering effects via the discrete GPU).
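As a rough illustration of splitting that kind of work, something like the ffmpeg invocation below decodes on the iGPU via QuickSync and encodes on the NVIDIA card via NVENC. This is just a sketch - it assumes your ffmpeg was built with both qsv and nvenc support, and the file names are placeholders:

```python
# Hypothetical split-workload transcode: QuickSync (iGPU) decodes, NVENC
# (discrete NVIDIA card) encodes. Flag support depends on your ffmpeg build.
import subprocess

cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",        # QuickSync handles the H.264 decode
    "-c:v", "h264_qsv",
    "-i", "input.mp4",        # placeholder input file
    "-c:v", "h264_nvenc",     # discrete NVIDIA card handles the encode
    "-preset", "p5",
    "output.mp4",             # placeholder output file
]
subprocess.run(cmd, check=True)
```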
 
I use the main video card for gaming and the secondary card handles other tasks for the secondary monitors, like webcam decoding and watching videos. I found that dedicating the main card to the main monitor eliminated most issues in games when switching between apps, and with general behavior. There were always issues running everything off one card.
 
Same. I had issues running multiple monitors off the dGPU. I reduced to two 4K monitors and run one off the dGPU and the other off the iGPU. No issues with games since.
 
I'm still pissed that Threadripper never saw anything past Zen 2 (except for Threadripper Pro). And what was Intel's last HEDT platform, X299? I don't know why HEDT disappeared, but it's dang annoying. I really hope both companies return to HEDT. I'm pretty sure one of the things that killed off HEDT was that mainstream platforms had CPUs going all the way up to 16 cores. HEDT still offered other shit besides high-core-count CPUs back when they were around. Plentiful PCIe lanes and higher-than-dual-channel RAM, just to name a couple things.
Sapphire Rapids came out in February for Intel - but it kinda sucks. Storm Peak is supposed to arrive this month for AMD (probably shipping next month). I looked at SR, but it was more expensive than Threadripper Pro (and was ~just~ its equal).
 
I basically mean the onboard graphics on my i7-12700K and my dedicated video card?

Any advantage? Like if you also run Emby or Plex on your PC, mostly just for a little while.

I hear mixed things.
The onboard Intel iGPU is supposedly good for video editing - if it's supported? Quick Sync?
 
Maybe it changed very recently, but things like Plex on Linux supported Intel iGPU QuickSync decoding very well, while support was less certain for AMD GPUs. Given that QuickSync-capable Intel iGPUs are the most common GPUs in the world, I can imagine a couple of scenarios like that.

Some programs in the Adobe suite over the years used the Intel iGPU/QuickSync, for example, even if you had a discrete GPU in your system, and would run much slower on CPU/GPU without QuickSync.
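If you want to sanity-check whether QuickSync/hardware decode is even available before pointing Plex or an editor at it, a rough probe on Linux might look like this (the paths and expected output are assumptions about a typical setup):

```python
# Rough check (Linux): look for a DRM render node and ask ffmpeg which
# hardware accelerators it was built with.
import glob
import subprocess

render_nodes = glob.glob("/dev/dri/renderD*")
print("DRM render nodes:", render_nodes or "none found")

out = subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"],
                     capture_output=True, text=True)
print(out.stdout)  # expect entries like 'qsv' or 'vaapi' if QuickSync is usable
```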
 
Sapphire Rapids came out in February for Intel - but it kinda sucks.
As far as I know, Sapphire Rapids only exists in the server/workstation market segment. HEDT is a "prosumer" segment that sits in between mainstream platforms and workstation/server platforms. But hey, if Intel HEDT is set to return with Sapphire Rapids, cool shit.

Storm Peak is supposed to arrive this month for AMD (probably shipping next month).
Now Storm Peak I hadn't heard about. Very interesting. I really do hope regular prosumer Threadripper is coming back (Threadripper Pro wasn't really prosumer but something higher and more expensive).

I looked at SR, but it was more expensive than Threadripper Pro (and was ~just~ its equal).
Ah well shit, that sucks.

Apologies to all, not trying to derail the thread here.
 
As far as I know, Sapphire Rapids only exists in the server/workstation market segment. HEDT is a "prosumer" segment that sits in between mainstream platforms and workstation/server platforms. But hey, if Intel HEDT is set to return with Sapphire Rapids, cool shit.
They did both. It's just that the HEDT branch has a similar naming scheme (just quad channel and 64 lanes instead of full fat), and is overpriced. They needed about 15-20% more juice than the prior gen to be competitive - and they didn't get it.
Now Storm Peak I hadn't heard about. Very interesting. I really do hope regular prosumer Threadripper is coming back (Threadripper Pro wasn't really prosumer but something higher and more expensive).
It is. Same as Sapphire Rapids, it's a unified naming scheme. Some HEDT, some workstation.
Ah well shit, that sucks.

Apologies to all, not trying to derail the thread here.
Meh. That’s how it’s supposed to be!
 
PhysX baby. Driver not updated since 2017 so you know it's good.
 
I use the iGPU when the discrete one runs photogrammetry calculations; otherwise the desktop is unresponsive.
 
My friend runs 6 displays on his PC, all of which are at least 1080p. He's using a Radeon RX 6900 XT, with a Radeon RX 570 as a secondary card to handle some of the extra monitors. I don't recall the specifics, but he said things didn't work out too well when trying to drive all 6 displays with one card (and one of those displays is a 4K 144Hz monitor too). So uuhh, I guess if you intend on using a shitload of monitors you might need more than one GPU. Although I coulda sworn Radeon GPUs were usually fine driving up to 6 displays in the past (like back during the Eyefinity days).

My friend's setup:

View: https://i.imgur.com/AlX5rGn.jpg


Around 2003/2004, I drove 4 random CRTs with an ATI AGP/PCI setup....

It was completely unnecessary but was def a nice flex at the time (one of the monitors' sole purpose was Winamp lol)
 
Here's a fun use case. Run Linux on a lower-end GPU (or your iGPU). Create a Windows virtual machine and do IOMMU passthrough to present your graphics card directly to the Windows VM.
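Before going down that road, it's worth checking that the card sits in its own IOMMU group, or passthrough gets messy. A quick sketch (Linux, assumes the IOMMU is already enabled in BIOS and on the kernel command line, e.g. intel_iommu=on):

```python
# List IOMMU groups before attempting VFIO passthrough. The GPU should sit in
# its own group (ideally with only its own audio function alongside it).
import os

base = "/sys/kernel/iommu_groups"
if not os.path.isdir(base):
    raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS/kernel?")

for group in sorted(os.listdir(base), key=int):
    devices = os.listdir(os.path.join(base, group, "devices"))
    print(f"IOMMU group {group}: {', '.join(devices)}")
```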
 
I have machines at work where I set up 4 video cards. While not a true "desktop" setup, I can simulate running desktop-level applications that use the GPU for machine learning, with just a tweak to use the next available GPU.
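The "next available GPU" tweak can be as simple as picking whichever card currently has the most free memory. A hypothetical helper, assuming PyTorch:

```python
# Pick the CUDA device with the most free memory right now.
# Assumes at least one CUDA device is visible to PyTorch.
import torch

def pick_least_busy_gpu() -> torch.device:
    best, best_free = 0, -1
    for i in range(torch.cuda.device_count()):
        free, _total = torch.cuda.mem_get_info(i)  # (free bytes, total bytes)
        if free > best_free:
            best, best_free = i, free
    return torch.device(f"cuda:{best}")

# model.to(pick_least_busy_gpu())  # then run the workload on that card
```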
 
I have machines at work where I set up 4 video cards. While not a true "desktop" setup, I can simulate running desktop-level applications that use the GPU for machine learning, with just a tweak to use the next available GPU.
Huh? Most motherboards don't have more than one PCIe x16 slot - well, maybe one 3.0 and one 4.0 - or just two, and usually not the same generation. The other thing I thought happened is that the 2nd card will force both to only run at x8?
 
Waiting for loooong motherboards so we can use six four-slot cards. For a chassis we can use a repurposed LG refrigerator. Bonus: the extra space can hold beer! 🙃
 
Huh? Most motherboards don't have more than one PCIe x16 slot - well, maybe one 3.0 and one 4.0 - or just two, and usually not the same generation. The other thing I thought happened is that the 2nd card will force both to only run at x8?
Given he said work - I'm betting HEDT boards.
 
I'm surprised more games don't allow SLI or Crossfire anymore. Imagine combining 2, 3, or even 4 high-end GPUs and playing games at 8K at 140+ fps.
 
I'm surprised more games don't allow SLI or Crossfire anymore. Imagine combining 2, 3, or even 4 high-end GPUs and playing games at 8K at 140+ fps.
Would be sweet (I loved SLI back when it was king)... but it is all about money, and we were a small percentage to begin with. Now add consoles (which are low-end PCs) and the percentage is even smaller. There is no motivation for game studios to program for it, because DX12 requires the game developer to add it in, whereas with DX11, NVIDIA was doing most of the work at the driver level.
 
I'm still pissed that Threadripper never saw anything past Zen 2 (except for Threadripper Pro). And what was Intel's last HEDT platform, X299? I don't know why HEDT disappeared, but it's dang annoying. I really hope both companies return to HEDT. I'm pretty sure one of the things that killed off HEDT was that mainstream platforms had CPUs going all the way up to 16 cores. HEDT still offered other shit besides high-core-count CPUs back when they were around. Plentiful PCIe lanes and higher-than-dual-channel RAM, just to name a couple things.
The Threadripper 7000 series on Zen 4 was announced.
 
I'm surprised more games don't allow SLI or Crossfire anymore. Imagine combining 2, 3, or even 4 high-end GPUs and playing games at 8K at 140+ fps.
Because oftentimes there was no point, or performance did not scale linearly with the cards added, and buying multiple mid-range cards to get high-end performance seldom worked out either.
 
What exactly does a game developer have to do to enable SLI? Doesn't sound like it could be that bad.

Any API documentation around?

ETA: only thing coming up is from 2011 for DX9 and DX10 https://developer.download.nvidia.com/whitepapers/2011/SLI_Best_Practices_2011_Feb.pdf
It's not really about "enabling" it, it's about how to leverage the GPU resources in order to actually speed up the game.

It turns out that syncing two cards to spit out frames is incredibly hard, and neither the bridges nor going over the bus is fast enough. The two cards don't share RAM, so that's another problem.

By the end, optimization became so difficult that titles might receive as low as a 10-20% performance boost in terms of frame rate, but then might also receive a 30ms frametime penalty due to the overhead of SLI. And all that for the joy of double the GPU cost.
In a lot of ways, SLI is very similar to using DLSS 3.5 frame generation or FSR frame generation in terms of latency penalty. Some games actually went negative with SLI, especially in situations that were CPU limited (think of any competitive FPS game where high frame rate is desirable). This would show up as micro-stuttering.

The more complex way of doing it was figuring out how to have one or both GPUs share loads in a GPGPU way, rather than any form of interleaving frames. And it turns out that isn't easy at all from an engine optimization standpoint. And when considering that 99.9% of all gamers will only ever have a single GPU, wasting development resources on 0.1% of gamers (or less, honestly) wasn't worth the money. DX12 was supposed to make it so multi-GPU wouldn't require special programming, but I don't think that work from Microsoft has gone anywhere.

Someone like Dan_D or Kyle can definitely explain this better than I can, but that is the general gist.
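Just to put rough numbers on that latency point, using the illustrative figures above (a ~15% fps gain and ~30ms of sync overhead - made-up-but-plausible values, not measurements):

```python
# Back-of-the-envelope numbers for why a modest fps gain can still feel worse
# once sync overhead is added to each frame's latency. Purely illustrative.
single_gpu_fps = 60.0
sli_scaling = 1.15          # ~15% higher frame rate, per the range above
sli_overhead_ms = 30.0      # extra per-frame latency from syncing the cards

single_frametime = 1000.0 / single_gpu_fps                 # ~16.7 ms per frame
sli_frametime = 1000.0 / (single_gpu_fps * sli_scaling)    # ~14.5 ms between frames
sli_latency = sli_frametime + sli_overhead_ms              # ~44.5 ms until a frame lands

print(f"single GPU: {single_frametime:.1f} ms/frame")
print(f"SLI: {sli_frametime:.1f} ms between frames, ~{sli_latency:.1f} ms latency per frame")
```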
 
What exactly does a game developer have to do to enable SLI? Doesn't sound like it could be that bad.

Any API documentation around?

ETA: only thing coming up is from 2011 for DX9 and DX10 https://developer.download.nvidia.com/whitepapers/2011/SLI_Best_Practices_2011_Feb.pdf
Well.......here we go.

If it was easy, NVIDIA and AMD would still be doing it in order to sell us multiple GPUs. I'll try and keep this as brief as I can. In the past, SLI and Crossfire/CrossfireX were enabled in the drivers and either required additional PCI-Express bandwidth or a separate, external bus for communication. However, the technology was significantly flawed. Firstly, it required the duplication of everything. This included VRAM. Having two GPUs in SLI or Crossfire didn't mean you doubled your RAM. Having two 12GB cards in your system didn't equal 24GB. You had 12GB per card. This is because all work had to be duplicated in each graphics card's frame buffer, so you couldn't take advantage of the increased amount of VRAM in your machine. This was limiting, as SLI and Crossfire/X allowed you to push insanely high resolutions in games. (More on this later.)

Ultimately, there was added latency in the whole process that resulted in micro-stuttering. Your frame rates were certainly higher on paper, but you didn't get a linear increase in performance as you added GPUs. You almost never saw 100% scaling, and if you did, it was only for the first additional GPU. Each one had diminishing returns over the second one, and in the rare cases you saw a system running 4-Way SLI or CrossfireX, that fourth GPU didn't really do anything. There were also limitations with the bandwidth of the SLI bridge connectors and Crossfire bridges. I ran into these myself. If you went beyond a certain resolution, there would be insufficient bandwidth over the bridge for the GPUs to use, resulting in a lockup of the system.

There was also the issue of dual-GPU cards that actually had two GPUs on a single video card. These had problems too, as they ran at reduced clocks (due to heat and power requirements) compared to single-GPU cards used in tandem, and they depended on an internal SLI or Crossfire of sorts. Speaking of which, SLI and Crossfire bridges had to undergo multiple revisions to overcome their inherent bandwidth limitations. Essentially, there were two ways to take advantage of SLI. The first was to use high end cards, which was expensive, even by today's standards in some cases. I spent around $1,400 or so for each of my Titan X's, which makes an RTX 4090 seem like a bargain by comparison. Using two or more ultra-high end cards allowed you to push multimonitor resolutions even beyond 4K back in the day, with varying degrees of success, though it usually worked.

Your other option was to buy two midrange cards and SLI those together for high end performance. This often led people to believe they would get performance on par with or better than that of a single high end card. This was dubious: their frame rates could be higher, but they were limited on VRAM and had the problem of microstutter and the rest of the baggage that came with SLI and Crossfire. Ultimately, it led to worse user experiences and complications with the system than you would get with a single card.

From the driver standpoint, it was primarily on NVIDIA and ATi/AMD to implement the technology in their drivers. Profiles had to be written to support each game. In other words, you had to wait for driver releases and constantly update them in order to support each game. That being said, you had the ability on NVIDIA drivers to define values yourself, though it never worked as well as the NVIDIA-created profiles did. AMD didn't allow this, so you had to wait for them to release drivers. AMD driver releases were comparatively rare, meaning you could wait a long time for support on newer games. All that said, AMD and NVIDIA did their best to update their drivers so they could keep selling multiple GPUs.

However, all of this changed with the introduction of DirectX 12. Now, multi-GPU has to be implemented by game developers, which basically killed the technology overnight. I can't speak to how difficult this is, but with a focus on consoles and porting console games to PC, there is no desire on the part of game developers to put in the work for what amounted to a low single-digit percentage of users. There are additional types of implementations as part of DirectX 12, such as explicit multi-GPU, but to my knowledge this has never been done. There are still SLI profiles in the drivers today, but few games benefit from them, and at this point only the highest end cards each generation have the required NVLink connectors to do SLI, NVLink having replaced the old SLI bridge connector starting with the RTX 20 series. At various points, midrange adapters have sometimes allowed SLI or Crossfire without a separate bridge, but this is no longer the case. I don't know offhand if DX12-level implementations require the bridge or not, but it doesn't matter, as no developer has ever done it to my knowledge.

In other words, not so easy to implement. If it was super easy, game developers would do it. When it was left up to the GPU manufacturers they did do so until DX12 came out. Even then, popularity of SLI and Crossfire dwindled due to the microstutter and cost issues.
 