4GB on GK104 (GTX 670/GTX 680) Is Useless

Alright guys, I have some good news! It looks like it's a driver issue after all! I did a run in Heaven at 2560x1440 and then one at 4032x960 to engage Surround, scaled by the displays. Here are the results:

2560x1440
2560x1440_122_0_0_988mv_4500_106.3.jpg


4032x960
4032x960_122_0_0_988mv_4500_62.6.jpg


The run in Surround was having exactly the same GPU usage issues as the runs at 8064x1440. Also, the driver crashed during two runs out of three, at stock clocks.
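For what it's worth, the two runs push nearly the same number of pixels, so raw pixel throughput can't explain the gap. A quick sketch (the resolutions and fps figures are taken from the screenshot filenames above):

```python
# Compare pixel load vs. measured fps for the two Heaven runs.
# Resolutions and fps scores come from the screenshot filenames above.

def megapixels(width, height):
    """Frame size in megapixels."""
    return width * height / 1e6

single = megapixels(2560, 1440)   # single-screen run, 106.3 fps
surround = megapixels(4032, 960)  # Surround run, 62.6 fps

print(f"2560x1440: {single:.2f} MP")
print(f"4032x960:  {surround:.2f} MP")
print(f"pixel increase: {surround / single - 1:.1%}")  # ~5% more pixels
print(f"fps drop:       {1 - 62.6 / 106.3:.1%}")       # ~41% slower

# ~5% more pixels but ~41% less performance: the bottleneck is in how
# Surround is being handled, not in the amount of work per frame.
```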

I'm off to bed now, very happy about this! :D
 
I sense this is a driver issue; I seem to remember reading about quad-SLI troubles here at some point.
 
I have two questions.
1) Could the lack of improved performance going from 2-way to 4-way SLI just be down to heat? I saw the pics of your system in the other thread, and I still find it difficult to believe all four cards cool well. You did state that the GPUs were barely being used. Aren't they just throttling down from being hot? Would this be different if you were watercooled?

2) On the other hand, if you're right about the bandwidth restricting the extra memory, am I safe to believe that the AMD counterparts, with 3GB of memory but a 384-bit memory bus, would (in quad CrossFire) smoke your current setup? I know the [H] review of tri-fire vs 3-way SLI (http://www.hardocp.com/article/2012/04/25/geforce_gtx_680_3way_sli_radeon_7970_trifire_review/9) was plagued by driver issues, but the higher bandwidth and memory capacity of the HDs didn't seem to help.
 
I have two questions.
1) Could the lack of improved performance going from 2-way to 4-way SLI just be down to heat? I saw the pics of your system in the other thread, and I still find it difficult to believe all four cards cool well. You did state that the GPUs were barely being used. Aren't they just throttling down from being hot? Would this be different if you were watercooled?

2) On the other hand, if you're right about the bandwidth restricting the extra memory, am I safe to believe that the AMD counterparts, with 3GB of memory but a 384-bit memory bus, would (in quad CrossFire) smoke your current setup? I know the [H] review of tri-fire vs 3-way SLI (http://www.hardocp.com/article/2012/04/25/geforce_gtx_680_3way_sli_radeon_7970_trifire_review/9) was plagued by driver issues, but the higher bandwidth and memory capacity of the HDs didn't seem to help.

It's not heat. The cards are designed to run this way on air. It'd be one thing if you had 3-way or 4-way SLI of the two- or three-fan custom air-cooled cards dumping hot air all over each other, but these cards are designed to blow air out of the case. It's really not bad at all (my results linked here). I'm on my 2nd set of 4-way SLI 680s (had 2GB reference). I see no reason to water cool.

The 4GB cards may be slightly slower, but not by a significant amount. Also, if you play games or use applications that take advantage of the extra VRAM, you will see a benefit (reference: BF3) at high resolutions (I game at 5760x1080). For 1440p/1080p/etc. - no...get 2GB (unless you want to "futureproof").

So far, it looks like the PLX chip has not affected fomoz's rig at all, because his numbers are where you'd expect them to be compared to an X79/680 rig.

I've got my 4th card out on cross-ship RMA with EVGA right now (ETA Monday, 7/16), so I'm not much use here. But if you guys read fomoz's last post above - it looks like problem solved.
 
Have you tried running off different drivers? I had those same kinds of issues with certain drivers running dual 690s.
 
Have you tried running off different drivers? I had those same kinds of issues with certain drivers running dual 690s.
I thought about it, but I'm not sure of the best way to roll back drivers. Also, how far back should I go? Any versions in particular I should try? Were you running 690s in Surround? Which drivers worked well for you?
 
Ahhh. I'm guessing that's your problem. I had a ton of issues getting quad-SLI working properly, and most drivers just refused to work no matter what.

Do a full clean install:

Control Panel - Uninstall - uninstall each of the NVIDIA drivers by hand (and any OC utilities). You may have to do this several times, rebooting in between.

Then, download and run Driver Sweeper and have it clean all NVIDIA (and any AMD) driver remnants out.

Finally, do a full install of the 304.48 drivers.
 
Easier to just format Windows 7 and start over. I assume you installed a fresh build of Win7 on your box and then applied 304.48? I didn't have that many driver issues (other than the performance niggles that are normal with 4-way SLI) because I always format and go. Data drives for games, so no problem reinstalling. NAS for everything else.
 
I find the idea that the memory bus could be slowing the cards down to be quite suspect, but fomoz did mention that increasing memory clocks by 13% increased overall performance by 10%, which supports the argument.
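A quick back-of-the-envelope check on that scaling claim (a sketch; the 256-bit bus and 6008 MT/s effective memory speed are assumed stock GTX 680 figures, not fomoz's exact clocks):

```python
# Back-of-the-envelope check of the memory-bandwidth argument.
# Assumed stock GTX 680 figures: 256-bit bus, 6008 MT/s effective rate.

BUS_WIDTH_BITS = 256
EFFECTIVE_RATE_MTS = 6008  # mega-transfers per second

def bandwidth_gbs(rate_mts, bus_bits=BUS_WIDTH_BITS):
    """Peak memory bandwidth in GB/s."""
    return rate_mts * 1e6 * bus_bits / 8 / 1e9

stock = bandwidth_gbs(EFFECTIVE_RATE_MTS)               # ~192 GB/s
overclocked = bandwidth_gbs(EFFECTIVE_RATE_MTS * 1.13)  # ~217 GB/s

print(f"stock:       {stock:.0f} GB/s")
print(f"+13% clocks: {overclocked:.0f} GB/s")

# If a 13% bandwidth bump yields ~10% more fps, the cards are spending
# most of their time bandwidth-bound, which is consistent with the argument.
```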

Driver issues are much more likely, though, as is running the set off a single x16 PCIe 3.0 connection. I've seen a thread he referenced where people (including Vega!) were trying to get PCIe 3.0 working properly with four cards on an X79 board, but they haven't been very successful yet, which lends further support to the idea of a driver issue.

On a side note, I have a Gigabyte X79-UD5 and the corresponding LGA2011 chip.

I have a GTX 680 as well, and I wasn't aware that PCI Express 3.0 functioned with the current Sandy Bridge chip.

Can someone please correct me if I am wrong?

Best regards to all here.
 
On a side note, I have a Gigabyte X79-UD5 and the corresponding LGA2011 chip.

I have a GTX 680 as well, and I wasn't aware that PCI Express 3.0 functioned with the current Sandy Bridge chip.

Can someone please correct me if I am wrong?

Best regards to all here.

Not exactly. Intel stated that Sandy Bridge-E was PCIe 3.0 capable. At the time, there were no cards to test this claim with. AMD released Tahiti with PCIe 3.0 support on X79; however, NVIDIA disabled this upon Kepler's release. They stated they were "certifying" for quite some time, until just a couple of weeks ago, when they announced they would not officially support PCIe 3.0 with Kepler on X79, yet they provided a workaround:

http://nvidia.custhelp.com/app/answers/detail/a_id/3135/~/geforce-600-series-gen3-support-on-x79-platform

The workaround requires some effort to implement properly (in my case: disable SLI, use only one monitor, enable, reboot, pray), but once working, it's solid.

For this situation: fomoz is on Z77, so he's PCIe 3.0 legit. I'm on X79, and I've had great luck with 4-way SLI Kepler and the fix (once I did the steps I outlined above). I almost moved to Z77 because of this fiasco. Seeing fomoz's gaming benches makes me kind of wish I had. Although Ivy Bridge-E in 2013 should fix this. ;)
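For context on why the Gen3 fix matters, here's a rough sketch of the theoretical per-direction bandwidth of an x16 slot at each PCIe generation (peak numbers; real throughput is lower):

```python
# Theoretical PCIe bandwidth per direction for an x16 slot.
# Gen 1/2 use 8b/10b line encoding; Gen 3 switched to 128b/130b.

def pcie_gbs(gen, lanes=16):
    """Peak data bandwidth in GB/s per direction for a given PCIe gen."""
    rate_gts, encoding = {
        1: (2.5, 8 / 10),
        2: (5.0, 8 / 10),
        3: (8.0, 128 / 130),
    }[gen]
    return rate_gts * encoding * lanes / 8

for gen in (1, 2, 3):
    print(f"PCIe {gen}.0 x16: {pcie_gbs(gen):.1f} GB/s")

# PCIe 2.0 x16: 8 GB/s vs PCIe 3.0 x16: ~15.8 GB/s, roughly double,
# which matters when four GPUs are shuffling Surround frames over the bus.
```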
 
Ahhh. I'm guessing that's your problem. I had a ton of issues getting quad-SLI working properly, and most drivers just refused to work no matter what.

Do a full clean install:

Control Panel - Uninstall - uninstall each of the NVIDIA drivers by hand (and any OC utilities). You may have to do this several times, rebooting in between.

Then, download and run Driver Sweeper and have it clean all NVIDIA (and any AMD) driver remnants out.

Finally, do a full install of the 304.48 drivers.
You're saying that you had Surround scaling properly with 304.48 in quad-SLI? I think I had those drivers before for 2-way SLI, but I might have upgraded to 304.79 as soon as I had 4-way SLI set up.
 
It was as proper as I could get it. Quad-SLI drivers are really lacking. 304.79 crapped my cards out just like yours are doing (low GPU usage, low voltage, etc.). Do a full clean install of 304.48 and see what happens.

And yes, 6000x1080 surround.
 
Just to throw this out there, but didn't NVIDIA claim no driver support for 4-way SLI on the GTX 670, even though they advertised the card as such? I don't know if it has been addressed in recent drivers, but I remember being very confused when I got my 670: it said in big bold icons on the box that the card was 4-way capable, and then the drivers didn't properly support it, only on the 680/690.
 
It was as proper as I could get it. Quad-SLI drivers are really lacking. 304.79 crapped my cards out just like yours are doing (low GPU usage, low voltage, etc.). Do a full clean install of 304.48 and see what happens.

And yes, 6000x1080 surround.
As far as I can see from Heaven and 3DMark11, I get excellent scaling in 4-way SLI as long as I don't use Surround. I'll try the 304.48 drivers tonight, thanks!
 
Why would that matter? It's the same memory bandwidth, right? The GPUs are talking to each other through the PLX chip instead of the motherboard/SLI connector, but I wouldn't think that would make much difference. Or are you talking about a 2GB/4GB difference?

After I posted that, I went to play Hard Reset and then crashed for the night. I'm having trouble remembering why now, but I knew it had the same bandwidth and still had my reasons for asking.

This may have been the reason, but I can't be sure.

If the 2GB models need more VRAM to handle the larger resolution, and the 690s are the same but with 4GB, then the 4GB is wasted on the 690s just as much as on the 670s/680s, and they should have the same problem you do with the 256-bit bus. If the 690s do not show the same issue you are having, that could indicate a problem specific to 670/680 4-way SLI rather than the bandwidth.

Just to throw this out there, but didn't NVIDIA claim no driver support for 4-way SLI on the GTX 670, even though they advertised the card as such? I don't know if it has been addressed in recent drivers, but I remember being very confused when I got my 670: it said in big bold icons on the box that the card was 4-way capable, and then the drivers didn't properly support it, only on the 680/690.

Nvidia added support for 4-way SLI with the 301.42 WHQL driver.
 
If the 2GB models need more VRAM to handle the larger resolution, and the 690s are the same but with 4GB, then the 4GB is wasted on the 690s just as much as on the 670s/680s, and they should have the same problem you do with the 256-bit bus. If the 690s do not show the same issue you are having, that could indicate a problem specific to 670/680 4-way SLI rather than the bandwidth.

690s still have 2GB per GPU, just like 680 SLI; it just sits on a single card instead of two different cards.
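To put rough numbers on the VRAM question, here's a sketch estimating just the framebuffer cost at various resolutions. The 4x MSAA and triple-buffering figures are assumptions for illustration; real games spend far more VRAM on textures and render targets.

```python
# Rough framebuffer-only VRAM estimate per resolution.
# Assumes 4 bytes/pixel color + 4 bytes/pixel depth/stencil,
# 4x MSAA on both targets, and 3 buffered color frames.
# These are illustrative assumptions, not measured game usage.

def framebuffer_mb(width, height, msaa=4, buffers=3):
    """Approximate framebuffer memory in MiB."""
    pixels = width * height
    color = pixels * 4 * msaa * buffers
    depth = pixels * 4 * msaa
    return (color + depth) / 2**20

for w, h in [(1920, 1080), (2560, 1440), (5760, 1080), (8064, 1440)]:
    print(f"{w}x{h}: ~{framebuffer_mb(w, h):.0f} MB")

# Even at 8064x1440 with 4x MSAA, the raw framebuffers stay well under
# 1 GB. Most of the VRAM pressure at high resolution comes from assets.
```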
 
It was as proper as I could get it. Quad-SLI drivers are really lacking. 304.79 crapped my cards out just like yours are doing (low GPU usage, low voltage, etc.). Do a full clean install of 304.48 and see what happens.

And yes, 6000x1080 surround.
Installed 304.48, same problem. [strike]I guess I'll play on a single screen until Surround gets fixed.[/strike] Yeah, right. I can't go back to playing on only one screen; what the hell was I thinking? I'm gonna have to tough it out for now. Luckily, even though performance in Surround isn't great right now, it's still pretty smooth.
 
Installed 304.48, same problem. I guess I'll play on a single screen until Surround gets fixed. Yeah, right. I can't go back to playing on only one screen; what the hell was I thinking? I'm gonna have to tough it out for now. Luckily, even though performance in Surround isn't great right now, it's still pretty smooth.

If you think it's a driver issue and you didn't nuke it to metal and rebuild (it sucks, I know!), then you haven't tried everything :).

Thinking I had a driver issue with my current build, I did this at least 10 times in one day, using a fast USB drive and installing to an SSD. It's faster than you think.
 
On a side note, I have a Gigabyte X79-UD5 and the corresponding LGA2011 chip.

I have a GTX 680 as well, and I wasn't aware that PCI Express 3.0 functioned with the current Sandy Bridge chip.

Can someone please correct me if I am wrong?

Best regards to all here.

It's one of the fundamental design differences Intel introduced with LGA1156: the PCIe lanes integrated on the CPU. LGA1366 with X58 and LGA2011 with X79 don't have this; instead the lanes are all on the chipset, so the CPU really has nothing to do with PCIe 3.0 capability.
 
If you think it's a driver issue and you didn't nuke it to metal and rebuild (it sucks, I know!), then you haven't tried everything :).

Thinking I had a driver issue with my current build, I did this at least 10 times in one day, using a fast USB drive and installing to an SSD. It's faster than you think.
There are only two driver versions, though. Dragon Age II is very playable at around 45 fps, and DiRT 3 doesn't go below 60 fps, so I'll wait and see what NVIDIA says :)
 
I found DA2 to be a VRAM hog, but not so much of a shader hog, and I don't believe DiRT 3 is terribly taxing. It's just there to represent a modern driving game; I usually skip over it in reviews.

I still think you should take the hour to nuke it to metal if you're not seeing what you should be. You're doing yourself, and us, an injustice by testing on a 'dirty' setup. Good reviewers would frown on you :).
 
I found DA2 to be a VRAM hog, but not so much of a shader hog, and I don't believe DiRT 3 is terribly taxing. It's just there to represent a modern driving game; I usually skip over it in reviews.

I still think you should take the hour to nuke it to metal if you're not seeing what you should be. You're doing yourself, and us, an injustice by testing on a 'dirty' setup. Good reviewers would frown on you :).
I'm not going to be posting any results in Surround just yet, definitely not while it's working like this :p I just want to play some games for now.
 
Not exactly. Intel stated that Sandy Bridge-E was PCIe 3.0 capable. At the time, there were no cards to test this claim with. AMD released Tahiti with PCIe 3.0 support on X79; however, NVIDIA disabled this upon Kepler's release. They stated they were "certifying" for quite some time, until just a couple of weeks ago, when they announced they would not officially support PCIe 3.0 with Kepler on X79, yet they provided a workaround:

http://nvidia.custhelp.com/app/answers/detail/a_id/3135/~/geforce-600-series-gen3-support-on-x79-platform

The workaround requires some effort to implement properly (in my case: disable SLI, use only one monitor, enable, reboot, pray), but once working, it's solid.

For this situation: fomoz is on Z77, so he's PCIe 3.0 legit. I'm on X79, and I've had great luck with 4-way SLI Kepler and the fix (once I did the steps I outlined above). I almost moved to Z77 because of this fiasco. Seeing fomoz's gaming benches makes me kind of wish I had. Although Ivy Bridge-E in 2013 should fix this. ;)

OMG thank you for making that easy for me! Now for some tests!!! YEY
 
It's one of the fundamental design differences Intel introduced with LGA1156: the PCIe lanes integrated on the CPU. LGA1366 with X58 and LGA2011 with X79 don't have this; instead the lanes are all on the chipset, so the CPU really has nothing to do with PCIe 3.0 capability.

Can you back this up for X79? I'm quite positive it's tied to the CPU, i.e. it will be "fixed" with Ivy Bridge-E. Perhaps I am wrong.
 
Can you back this up for X79? I'm quite positive it's tied to the CPU, i.e. it will be "fixed" with Ivy Bridge-E. Perhaps I am wrong.

I shouldn't have to. No offense, but to me this is absolutely common knowledge about these chipsets. Maybe I read too much :).

Essentially, the CPU is attached to one side of the X58/X79 PCH, and the PCIe lanes are on the other. The CPUs use a HyperTransport-like interconnect to talk to the PCH, and to other CPUs in multi-socket Xeon systems.

That means the PCH determines what PCIe version is supported across however many lanes, not the CPU as on LGA1156 and LGA1155. It's a matter of BIOS and driver support.
 
Thanks. So on Z77 it's tied to the CPU (use an i5-2500K = PCIe 2.0; use an i7-3770K = PCIe 3.0), but on X79 it's based on the chipset. This is why I was confused.
 
Thanks. So on Z77 it's tied to the CPU (use an i5-2500K = PCIe 2.0; use an i7-3770K = PCIe 3.0), but on X79 it's based on the chipset. This is why I was confused.

Yup. Any PCIe version limitation on X79, if it's real, is imposed by Intel. To corroborate the above, many Z68 boards would run PCIe 3.0 with an Ivy Bridge CPU. There has to be BIOS and voltage-regulation support for it, but the chipset (formerly called the MCH, for memory controller hub) has nothing to do with the main PCIe lanes, just the four it has for other components.
 
If you think it's a driver issue and you didn't nuke it to metal and rebuild (it sucks, I know!), then you haven't tried everything :).

Thinking I had a driver issue with my current build, I did this at least 10 times in one day, using a fast USB drive and installing to an SSD. It's faster than you think.

I did this last night (upgraded to 2x Samsung 830 256GB in RAID 0) and installed 304.48. BF3 performance is leagues better. Thanks for the tip re: 304.48, Homer. I'm a diehard "bleeding edge is best" guy, so sometimes I just need to be re-told to try the last version. :)

Nuking it...the only way to be sure. Gotta love USB sticks plus SSD!
 
Glad it's working better. The .79 drivers really blew for quad-SLI: low GPU usage, fluctuating/low voltages, weird things.
 
Glad it's working better. The .79 drivers really blew for quad-SLI: low GPU usage, fluctuating/low voltages, weird things.

Would have to agree. .79 and .48 were both very buggy for my tri-SLI.

Every time I exited a game, the GPU clocks would stay at 3D-mode speeds; no matter what I tried, they wouldn't clock back down.

Back to 301.42 for me.
 
upgraded to 2x Samsung 830 256GB in RAID 0
Please don't tempt me :) I have a similar setup in my laptop, all I can say is wow. I'm never going to [be able to] buy anything less [ridiculous] again.
 
It's one of the fundamental differences in design that Intel introduced with LGA1156- the integrated PCIe lanes on the CPU. LGA1366 with X58 and LGA2011 with X79 don't have this, and instead they're all on the chipset, so the CPU really has nothing to do with PCIe 3.0 capability.

Um, LGA2011 has 40 integrated PCIe lanes, which is part of the 444-pin increase over its predecessor, LGA1567 (Xeon X7000). The actual issue was as posted before: Intel did not have any PCIe 3.0 devices to test for compatibility certification (amid rumors that PCIe 3.0 was broken in the first revisions of SB-E).

The X79 PCH has an extra 8 (+4 disabled) PCIe 2.0 lanes.
 
Um, LGA2011 has 40 integrated PCIe lanes, which is part of the 444-pin increase over its predecessor, LGA1567 (Xeon X7000). The actual issue was as posted before: Intel did not have any PCIe 3.0 devices to test for compatibility certification (amid rumors that PCIe 3.0 was broken in the first revisions of SB-E).

The X79 PCH has an extra 8 (+4 disabled) PCIe 2.0 lanes.

You're forgetting something here: it's not the CPU that has the lanes, it's the "northbridge", which sits between the CPU and the PCH.

Those extra pins were for another memory channel and power for 8 cores.
 
I did this last night (upgraded to 2x Samsung 830 256GB in RAID 0) and installed 304.48. BF3 performance is leagues better. Thanks for the tip re: 304.48, Homer. I'm a diehard "bleeding edge is best" guy, so sometimes I just need to be re-told to try the last version. :)

Nuking it...the only way to be sure. Gotta love USB sticks plus SSD!

I'm very glad that worked out for you!
 
You're forgetting something here: it's not the CPU that has the lanes, it's the "northbridge", which sits between the CPU and the PCH.

Those extra pins were for another memory channel and power for 8 cores.

The northbridge is part of the CPU now, ever since the first-gen Core i series on LGA1156. There is no isolation of the PCIe lanes from the CPU; nothing sits between them. The additional pins on the socket are for more PCIe lanes.

Once again, the PCIe lanes on Lynnfield and newer Intel processors are directly on the CPU die, not on the chipset.
 
The northbridge is part of the CPU now, ever since the first-gen Core i series on LGA1156. There is no isolation of the PCIe lanes from the CPU; nothing sits between them. The additional pins on the socket are for more PCIe lanes.

Once again, the PCIe lanes on Lynnfield and newer Intel processors are directly on the CPU die, not on the chipset.

I thought I had heard that somewhere. Now I'm confused even more. :)

So - PCIe capabilities on X79 = derived from CPU in a similar fashion as Z77?
 
I thought I had heard that somewhere. Now I'm confused even more. :)

So - PCIe capabilities on X79 = derived from CPU in a similar fashion as Z77?

Correct. The only time this is not the case is when there is a multiplexing or switching chip to give you "more" lanes.
 
I thought I had heard that somewhere. Now I'm confused even more. :)

So - PCIe capabilities on X79 = derived from CPU in a similar fashion as Z77?

I'm going to have to issue an apology here, and I'm definitely sorry!

The X58 chipset absolutely has the PCIe lanes attached, as opposed to the LGA1366 CPU.

The X79 chipset has only eight PCIe 2.0 lanes, while the LGA2011 CPU has forty lanes.

I made the incorrect assumption that X79 was X58's equal in design topology and proceeded to convince people that I was correct, and I apologize for that. Sorry!
 
OMG thank you for making that easy for me! Now for some tests!!! YEY


It doesn't seem to work on my X79-UD5. The screen just disappears, and it's still PCI Express 2.0.

I wonder what's wrong; I've tried a few different drivers (official and beta). Very strange.
 
It doesn't seem to work on my X79-UD5. The screen just disappears, and it's still PCI Express 2.0.

I wonder what's wrong; I've tried a few different drivers (official and beta). Very strange.

It requires some kid gloves and tweaking, hence why it's not supported. Try it with SLI disabled and only one display connected; you can also try a fresh Windows 7 install. All of those factors worked for me. Anything else and I had hangs on boot, etc.
 