NVIDIA GPU Generational Performance Part 3 @ [H]

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,651
NVIDIA GPU Generational Performance Part 3

In Part 3 of our NVIDIA GPU Generational Performance article we compare the upgrade advantage of the GeForce GTX 780 Ti over the GeForce GTX 780, the GeForce GTX 980 Ti over the GeForce GTX 980, and the GeForce GTX 1080 Ti over the GeForce GTX 1080. We will see how much of a performance upgrade each "Ti" version offered.

If you like our content, please support HardOCP on Patreon.
 
I LOVED this series of articles, keep them coming Kyle... I tell you what, if you wanna do AMD and wanna go further back, I should have a couple Radeon 5850s lying around I can send. A Cypress->Cayman->Tahiti->Hawaii->Fiji->Polaris->Vega (...or just Hawaii, Fiji and Vega to be realistic...) series would be cool.

Then a CPU one showing the glacial, gradual path of performance increases from the 2600K to the 9900K.
 
I too liked the series. One suggestion would be to include benchmarks other than games. There was a lot of tech involved in each new generation, and I would be curious to see how the cards differentiate when doing 3D rendering, Adobe Premiere acceleration, etc.
 
> I LOVED this series of articles, keep them coming Kyle...

Right now 5 years is a good place to start, which is the 290X I will be using. To go further back wouldn't be fair to the NVIDIA comparison (want to keep both articles showing the same years of history), and honestly anything below the 290X is going to be way too slow in modern games, even at low settings, to show results. Look at how bad the 780 did in modern games; we had to test at low settings, and even then sometimes that wasn't low enough! So it would be hard to show any real results in modern games with hardware 5+ years old. There has to be a real-world stopping point. Otherwise, it'd take a month to put together an article :p Gotta draw the line somewhere. But I appreciate the feedback. We will be comparing Hawaii, Fiji, Polaris and Vega, which is the most relevant today. We will be comparing the "high-end", which gets a little weird with AMD, and the "mid-range" with Polaris, so a wide variety will be tested.
 
I am surprised that the 1080 holds around a 30% lead over the 980 Ti. I remember when the 1080 first came out, the difference between the two wasn't quite as large.

Once the 2080 comes out, it will be time to replace my 980 Ti, hopefully with a 1080 Ti, but at a minimum the 1080.
 
This makes me want to see the performance differences between cards from the same budget bracket across different generations as well, to see how those stack up against the difference of going from a lower or mid-range card to the next step up. Basically I'm curious whether the performance deltas are similar, or much steeper between GPU market segments than they are between generations within the most closely matched segment. Ideally they'd be similar, but something tells me the gap between market segments is probably larger, to force people's hands into buying an even more expensive card. In other words: milk them further with each bracket they move up, or make them wait an eternity for meager generational improvements.

For example, how do the performance differences across the 780/980/1080 compare with the 1070 Ti/1080/1080 Ti in terms of % gains?
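
For anyone who wants to crunch those deltas themselves, here's a minimal Python sketch; the FPS numbers below are made-up placeholders purely for illustration, not figures from the article:

```python
# Minimal sketch for comparing generational vs. market-segment % gains.
# All FPS numbers here are hypothetical placeholders, NOT [H] benchmark data.
fps = {
    "GTX 780": 40.0,
    "GTX 980": 55.0,
    "GTX 1080": 80.0,      # generational steps in the x80 segment
    "GTX 1070 Ti": 70.0,
    "GTX 1080 Ti": 105.0,  # segment steps within Pascal
}

def pct_gain(base: float, new: float) -> float:
    """Percentage gain of `new` over `base`."""
    return (new - base) / base * 100.0

print("Generational (x80 tier):")
for a, b in [("GTX 780", "GTX 980"), ("GTX 980", "GTX 1080")]:
    print(f"  {a} -> {b}: {pct_gain(fps[a], fps[b]):+.0f}%")

print("Within one generation (Pascal segments):")
for a, b in [("GTX 1070 Ti", "GTX 1080"), ("GTX 1080", "GTX 1080 Ti")]:
    print(f"  {a} -> {b}: {pct_gain(fps[a], fps[b]):+.0f}%")
```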
 
> Right now 5 years is a good place to start, which is the 290X I will be using...

I would disagree slightly with this; personally I think you should at least include the 280X/7970 GHz Edition, because it's actually quite decent in modern games and has held up much better than the 780.

Still gaming quite happily with mine.
 
Thanks for the fun and nostalgic read, guys. I've had two Tis so far in my life: a pair of 560 Tis, and the 1080 Ti presently in my 1440p rig. Outside of the possibility of AMD coming out with some surprise, I don't believe I'll ever go below an x80 Ti again. This article just shows how pretty much every generation is a giant killer. As much as I loved my 970 SLI back in the day, if I'd held out for a 980 Ti, that in turn would've kept me from putting money into 1080 SLI. A single Ti will hold most people over, free of the other compromises, until a better buy comes along.
 
> Thanks for the fun and nostalgic read, guys...

Rule of thumb looks like the x80 Ti from the last generation is more or less equal to the x70 of the next. The GTX 1070 was more or less as fast as the GTX 980 Ti, and the GTX 970 is comparable to the GTX 780 Ti. So, going by that rule of thumb, the RTX 2070 will be as fast as the GTX 1080 Ti.
 
Brent and Kyle, mega kudos for these articles, looking forward to the AMD side of things!

 
Given the price tag it carried, I was a bit reluctant to get the 980 Ti over the 980 as an upgrade to my 780 SLI setup. Seeing these results makes it completely justified.

Thanks for putting in the time and effort to do these awesome reviews, [H]!
 
> Given the price tag it carried, I was a bit reluctant to get the 980 Ti over the 980 as an upgrade to my 780 SLI setup...

I'd grabbed a 970 to upgrade from 670 SLI, and then went 970 SLI. It worked better than I would have expected, but I'd bought the second card before the 980 Ti was released, and I sure wished I'd waited!

[still do...]
 
> Right now 5 years is a good place to start, which is the 290X I will be using...

I would recommend the addition of Tahiti (7970 GHz / R9 280X); it is still quite capable of 1080p gaming. It destroys the GTX 780 in quite a few games, even though at launch its competition was the GTX 680.
 
> I would recommend the addition of Tahiti (7970 GHz / R9 280X)...
Remember when people recommended the 780 Ti over the R9 290X? Well... that architecture was rendered obsolete once Maxwell launched, for some reason.
 
> Right now 5 years is a good place to start, which is the 290X I will be using...
Will ya be testing the R9 Nano? I always wanted one of those... till I bought my GTX 1060.
 
Once the AMD series are all done, would love to see a final article pulling all of this together!
 
Right now I'm running a 960 and I have been chomping at the bit to upgrade. I will wait for the 2000 series. If it is the 2070, 2080 or 2080ti, I know that I will be happy with the upgrade.
 
I can't wait for the AMD ones. I love how focused each part was, which makes the gaming performance over the generations crystal clear. A great foundation for evaluating the upcoming generation.
 
These articles have been a great read so far. I've grown up using AMD cards so I'm looking forward to AMD articles.

My next card will be nvidia though.
 
> Remember when people recommended the 780 Ti over the R9 290X?...
Lol, I bought my 2nd-gen 290X DCU2, with a year of warranty left, from a guy who happily told me he was 'upgrading' to a 780 Ti. He lost out on that one, especially as games reached the 3-4 GB VRAM range at 1080p and 1440p.
 
Some of the performance hits on the older cards could be due to drivers. Nvidia has been known to degrade the performance of their previous generation of cards (with the use of drivers), upon the launch of new ones. Just saying.
 
> Some of the performance hits on the older cards could be due to drivers...

Might want to try providing 'known' benchmarks that support your statement.
 
> Some of the performance hits on the older cards could be due to drivers...
No they haven't.
 
Just do a quick search for "Nvidia drivers bricking cards". There's no shortage of information out there.
And you can also give this a watch.

This very website has done articles on performance over time that debunk your claim. Other legitimate websites have also debunked it. There are a lot of things to criticize NVIDIA about; deliberately crippling their cards' performance isn't one of them.
 
Can we get one more graph done for Far Cry 4 and The Witcher 3 with settings that match the AMD runs? HairWorks and, I think, the HBAO settings were different in The Witcher 3 between the AMD and NVIDIA runs. For Far Cry 4, different terminology was used on the graphs, so the settings might actually have been the same; can that be clarified, please?

I know the performance in the Witcher 3 will probably be crappy, but having the exact settings used across all the cards will make a nice overall grand comparison possible.

Thanks, really loving this series of reviews :)
 
I just scroll up and down, but it would have been cool to have all the graphics cards on one single chart so it's easier for people to compare what they really get from going from one card to another. To be clear, I understand the information is all there; it just might have been easier to compare in one cohesive chart.
 
I'll take the under.

Depends on if DLSS turns out to be a real thing... if yes, over 30% will be ez. If DLSS turns out to be a flop or graphics quality isn't up to snuff, 30% might be tough.

Also depends on if we're talking raw rasterized performance or ray tracing shit turned on.
 
> Also depends on if we're talking raw rasterized performance or ray tracing shit turned on.

As said before, ray tracing performance on a pre-Turing GPU is exactly 0 FPS ;)

That means that Turing is not slower than Pascal, but that in ray tracing it is *infinitely faster!

As far as raw rasterization performance goes, <30% at the same settings (pre-DLSS) would be a disappointment, but it certainly wouldn't make the product in any way a failure, given what else it brings to the table.


*[depends on the individual's theological position on the division of a quantity by zero]
 
> As far as raw rasterization performance goes, <30% at the same settings (pre-DLSS) would be a disappointment...

At a certain point we have to admit we're standing on the shoulders of giants, and keeping up such an incredibly fast cadence of improvements is tough. So I agree, it's definitely not a failure; it's just life.
 
> Depends on if DLSS turns out to be a real thing... if yes, over 30% will be ez...

It would still be impossible to do an apples-to-apples comparison with the same AA settings, though (while taking advantage of DLSS), unless there was agreement that DLSS is equivalent to some Nx AA of another type. The better AA techniques now can be costly, so it might look like you're intentionally tanking the performance of the older card to compare against a new tech. It's a tricky situation, and it would require the addition of screenshots (which could also be cherry-picked in a positive/negative fashion) from each benchmark run so people could compare the AA efficacy between the two techs.

With this generation and the new tech it brings, frame rates alone don't really get the job done, and a lot more "user perception/enjoyment" metrics are required for people to determine whether it's worth the upgrade or not. Ray tracing is definitely in that bucket, as it will change the way a game is played, regardless of frame rate improvements. It's like having a souped-up Civic that can go as fast as a luxury car. You may have the same speed (FPS), but the latter, and the additional tech it brings to the table, changes the entire experience.

> As said before, ray tracing performance on a pre-Turing GPU is exactly 0 FPS ;)...

I agree with what you're saying, but sadly everyone is proclaiming what a failure it is, when only a third of the GPU is being used in these (far too early released, IMO) reviews. Some large applications people aren't taking into account are VR and render/compute tasks. I pre-ordered a Pimax 8K X, for instance, and it was being developed specifically to take advantage of (or require) a Turing GPU. Render compute is admittedly going to affect consumers only minimally in a direct sense, but it will have a huge impact on the entertainment industry. OTOY (the ones behind OctaneRender/ORBX/Render Token) have said that early benchmarks were close to 10x Pascal for ray tracing, and a lot of existing shader workloads can be offloaded to the tensor/RT cores, hopefully just by the game engines themselves being patched.
 