2080 Ti for $1000 confirmed, HDMI 2.1/VRR confirmed dead for another year

Remember when people were subtitling the Hitler outrage scene from that Valkyrie Tom Cruise movie, and someone did an Nvidia one? Need to see if anyone has done one for Turing yet, but no time atm
 
Wrong movie. You're thinking of the movie Downfall and not that Tom Cruise turd.
 
No bro.. Valkyrie. This isn't the one I was referring to, but same source movie.. now we need a Turing version lol

 
I will wait for the 7nm cards. I still have several other upgrades to do that hold better value, since I am barely gaming these days with all the work I have anyway. For my in-between fix I will finish some case modding that I left half-done.

Next gen will be a more substantial upgrade over my 1080Ti and software support will be better.

Yeah, I just yesterday bought a used 1080TI after seeing the pricing and Nvidia's lack of real detail on the non-ray tracing improvements to the cards. Ray Tracing is a ways off from inclusion in typical games, and I don't really care about any of the near-launch titles, especially not for just one tech checkbox. It is worrying they felt it necessary to spend so much time patting themselves on the back but couldn't bother with any performance figures until everyone called them out on that, and even their response there was vague.

1080TI is going to be a monster for most everything for a while. See what the market does to Nvidia's money grab with these new cards, and how ray tracing matures. I love being on the bleeding edge for most things, but Nvidia just doesn't have my trust any more and their announcement and handling of real details and NDAs isn't improving things.
 
The vag-rash is real in this thread...

Nvidia has done an excellent job putting new technology into Turing that will actually get used in games, especially while they have zero competition in this space.

I don't expect ray-tracing to be ubiquitous like AA and AO are today, but the hardware and driver software is there now and it works- and that's what we need to get started!
 
I think nvidia is like suggesting "looky here fools.. Pascal is still good for what's out, here's something to mess with while we improve on Turing/Dr and move towards 7nm.. but we want $$$$$ ...more than usual... because we have a lot of work left to do!! And it's expensive!!"
 
Jay nailed it. Get some cream if you need it for the burn.
 
I'm getting a 2080 Ti, the EVGA XC Ultra one. Hopefully it does the 0 RPM fan thing of the Gigabyte 1080 it's replacing. 2x the memory bandwidth and 50% more flops should make it substantially faster. If games where the frame rate isn't super important make good use of ray tracing, awesome.
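
For a quick sanity check on those ratios, here's a back-of-the-envelope comparison using the commonly quoted spec figures (assumed reference numbers, not measurements; actual game performance won't track either ratio exactly):

```python
# Rough spec comparison: GTX 1080 vs RTX 2080 Ti.
# Figures are the commonly quoted reference numbers (assumed, not measured).
gtx_1080   = {"mem_bw_gb_s": 320, "fp32_tflops": 8.9}    # 10 Gbps GDDR5X, 256-bit
rtx_2080ti = {"mem_bw_gb_s": 616, "fp32_tflops": 13.4}   # 14 Gbps GDDR6, 352-bit

bw_ratio    = rtx_2080ti["mem_bw_gb_s"] / gtx_1080["mem_bw_gb_s"]
flops_ratio = rtx_2080ti["fp32_tflops"] / gtx_1080["fp32_tflops"]

print(f"Memory bandwidth: {bw_ratio:.2f}x")    # ~1.9x, i.e. roughly 2x
print(f"FP32 throughput:  {flops_ratio:.2f}x") # ~1.5x, i.e. roughly +50%
```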
 
I think the 20 series is going to show its true muscle at 4K against the 10 series, and that's without the ray tracing. I can also see 1440p becoming the new resolution where things get CPU limited due to the immense power of the new graphics cards.
 
I think we'll start to see more 'stratification' in CPU performance, though hopefully we'll still see the 'slower/wider' CPUs putting out great frametimes!
 
How long do you think that will take, another five years for Nvidia to get it right? It took them ten years for ray tracing. When you turn on ray tracing you will lose some of the card's performance at 4K, just like the 1080 Ti now with its 11GB of RAM at 60Hz, and that's if you even get 60Hz in some games. The RTX hardware will be working overtime, and then it's CTD.
 
It's pretty obvious to me what Nvidia did here, and I'm surprised others haven't mentioned it. Nvidia stopped being a PC gaming company several years ago; today they design GPUs primarily with the AI/datacenter market in mind, and that's where their R&D is focused, because that's where they are seeing explosive growth. So naturally Volta and Turing were massive dies focused on those markets to get ahead of Intel, and because PC gaming is still a big chunk of their revenue, they repurposed Turing's useless silicon, like the tensor cores, for stuff like DLSS, which will hardly ever get used. Even the ray tracing part was meant for companies like Pixar, so they would buy Quadro systems instead of Intel.

Nvidia needed a way to sell their failed Quadro cores, and here we are, left with a gigantic die that now costs $1300 after tax for a 2080 Ti, with features we can't really use or don't want to use based on the performance metrics that have been shown. Don't get your hopes up of AMD doing any better, because they're busy with Ryzen, and they too will copy Nvidia's strategy because it makes money.

Yes, this isn't anything new; Nvidia has traditionally used cut-down pro cards, but we never had to deal with largely useless extra silicon like tensor cores on a consumer GPU and be stuck paying for it.

Put it this way: would you take Turing with its useless tensor and RTX cores at 750+mm², or a refined Pascal chip at that size without the tensor and RTX cores? I know which I'd pick.
 
I like it. Very cynical. The only flaw I see is that gaming volume vastly outpaces the Quadro market. If it served them better, with those volumes it'd be no problem to make that 1/3 of the die CUDA cores rather than RTX.

My cynical theory is that nVidia is using their expertise in areas AMD and Intel can't follow. If they execute just DLSS well and the 2080 Ti is 2x the perf of a 1080 Ti... how do you compete with that? It's kind of like AMD with Mantle, but nVidia has the people/tech to do it.
 
I think Intel and AMD will both follow Nvidia, but probably not catch them on performance in the professional market. Yeah, Nvidia's PC gaming business is still a huge chunk of their revenue, but their long-term growth focus is elsewhere now, with the PC gaming side getting the leftovers. In an ideal world, if AMD had billions of dollars and Intel was also making GPUs, I bet we would see a pure gaming GPU from Nvidia today instead of this thing, which I honestly find disappointing.
 
I don't expect ray-tracing to be ubiquitous like AA and AO are today, but the hardware and driver software is there now and it works- and that's what we need to get started!

The real question is going to be - how does the RT on these cards hold up by the time it really starts to show up in games? Will be hard to tell until we see what future implementations can push, but if the 2080ti is at 10 gigarays, the 2080 at 8 and the 2070 at 6... at what point would they need to stop decreasing the RT capability for it to continue to be a useful feature?

For instance, if a 2060 launches at 4 gigarays, and a 2050 at 2 gigarays, where does that put developers as far as having a baseline to target for specs? When does it drop to a point where it can no longer properly keep up? Some articles put the 1080ti capable of calculating about 1.21 gigarays per second (which is a little suspect as it makes me want to post a Doc Brown gif), and I assume that's at the expense of other rendering capabilities, but if they want it to be a feature that gets a lot of uptake (and if developers want to save the work of doing lighting "the old way"), they wouldn't want to set the baseline so low as to discourage use of it on mid-range parts, where the real sales volume will be.

I'm hoping they set a baseline for any mainstream cards and make it the "minimum RT capability" of RTX 20 series (assuming there are any more cards in the lineup, and they don't just wait until 7nm for a full top-to-bottom refresh). That would help to assure developers that they're implementing a feature that may get some actual uptake relatively quickly, instead of having to wait ~5 years for fast enough hardware RT to trickle down to mainstream parts.

(or, maybe I just misunderstood the whole thing, and decreasing the gigarays just decreases the fidelity of the lighting, not necessarily affecting overall performance?)
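
For a rough sense of scale, here's a naive rays-per-pixel budget calculation (this assumes a flat rays-per-pixel model and ignores hybrid rendering, denoising, and everything else real engines do, so the numbers are illustrative only; the 2060/2050 figures are the hypothetical ones from above):

```python
# Naive ray budget: rays per pixel available at a given resolution and frame rate.
# Purely illustrative; real hybrid renderers cast far fewer rays per pixel per effect.
def rays_per_pixel(gigarays_per_sec, width, height, fps):
    rays_per_frame = gigarays_per_sec * 1e9 / fps
    return rays_per_frame / (width * height)

cards = [("2080 Ti", 10), ("2080", 8), ("2070", 6),
         ("hypothetical 2060", 4), ("1080 Ti (reported)", 1.21)]

for name, grays in cards:
    print(f"{name:>20}: {rays_per_pixel(grays, 2560, 1440, 60):4.1f} rays/px @ 1440p60, "
          f"{rays_per_pixel(grays, 3840, 2160, 60):4.1f} rays/px @ 4K60")
```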
 
Do we know if Turing's RTX hardware works with MS's API? Or how AMD's hardware handles the same? AMD will have Vega/Navi in whatever ends up in the next PlayStation/Xbox in a year or two (tops, I think), so that would be a big install base for cross-platform titles. There have been a bunch of titles getting more feature-rich PC versions lately.
 
I'll hold onto my 1070 and RX 580s.

I'm not paying for their projected cryptocurrency losses.

 
Same, the 1080 in my laptop plays everything just fine. Once the upcoming games get going, I'll decide what I need then, if what I have starts to show its age.
 
With preorders, you run a cost-benefit analysis and decide if it's worth placing a preorder based on the information you have and the information you'll get before it ships, as well as the cancellation and return policy of the vendor. Blanket statements like "never preorder" are as useful as "always preorder" in that they ignore a whole spectrum of variables.

With any hot hardware product, unless you sit there and refresh like a madman and preorder immediately at announcement, you may be looking at 2-3 months before you can get the thing at MSRP. That's what happened with the 10-series: it launched at the end of May 2016 and the 1080 was not available at MSRP until the end of August. Similar thing with Google's Pixel phones and many other products. PC hardware is like hyperinflation; there is value in those 3 months. At the very least, if you can cancel and your credit card doesn't get charged, why would you not preorder...? You can even time your preorder to get the second batch after all the reviews have come out and then make your decision (though I'm sure we'll get plenty of leaks before Sept 20th).
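
To make the cost-benefit idea concrete, here's a toy expected-cost sketch; every number in it (shortage probability, markup, value of having the card sooner) is a made-up placeholder, and the only point is that a free-to-cancel preorder caps your downside:

```python
# Toy preorder decision model. All prices and probabilities are invented placeholders.
MSRP = 1199                 # 2080 Ti Founders Edition list price
SCALPER_PREMIUM = 200       # assumed markup if MSRP stock is scarce and you buy anyway
P_SHORTAGE = 0.6            # assumed chance MSRP stock takes ~3 months to materialize
MONTHS_WAITING = 3
VALUE_PER_MONTH = 30        # assumed personal value of having the card a month sooner

expected_wait = MSRP + P_SHORTAGE * (SCALPER_PREMIUM + MONTHS_WAITING * VALUE_PER_MONTH)
expected_preorder = MSRP    # cancellable preorder, card not charged until it ships

print(f"Wait for stock:       ~${expected_wait:.0f} expected")
print(f"Cancellable preorder:  ${expected_preorder} (cancel after reviews if they disappoint)")
```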
 
Yeah, I pre-ordered two Tis mainly because I wanted them, but also to do a video. I was one of the first with Vega 64 Crossfire and that video got 30,000 views on YouTube. Worth it for me.
 
Can I say, one of the things I'm genuinely excited about is NVLink. It enables a 100 GB/s two-way interconnect between the cards, which is 50x higher than SLI. This allows the cards to use a shared memory buffer, which means they can be treated as a single graphics card by the driver. In theory we should see almost perfect scaling in every game; however, NVLink still has a fraction of the local GDDR6 bus bandwidth (616 GB/s on the 2080 Ti) and a third of the bandwidth of the professional Quadro NVLink (300 GB/s). But it should still offer significant scaling and, more importantly, uniform scaling across games that use similar amounts of memory.

I think this may make NVLink a decent upgrade-path option. Imagine for the sake of argument that Maxwell had NVLink. Let's say you bought a 980 Ti in the summer of 2015. When the 10-series cards came out, you still had the performance of a 1070, so you decided to skip that generation. Now the 20-series is coming out and your 980 Ti is getting old. Rather than dishing out $700 for a 2080 or $1200 for a 2080 Ti, you could just spend $250 on another 980 Ti, and with 85% scaling you'd have the performance of a 1080 Ti, so you're good for another generation. Clearly we don't have the numbers yet, but if it scales 80-90% in everything, I think that will bring dual-GPU setups back from the dead.
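
To put rough numbers on that (the relative-performance index values and the 85% scaling figure are assumptions for illustration, not benchmarks):

```python
# Rough relative-performance index, GTX 980 Ti = 1.0 (assumed figures, not benchmarks).
perf_index = {"980 Ti": 1.00, "1070": 1.00, "1080 Ti": 1.75}

def dual_gpu(single, scaling=0.85):
    """Effective performance of two identical cards at a given scaling efficiency."""
    return single * (1 + scaling)

print(f"2x 980 Ti @ 85% scaling: {dual_gpu(perf_index['980 Ti']):.2f}x a single 980 Ti")
print(f"Single 1080 Ti:          {perf_index['1080 Ti']:.2f}x a single 980 Ti")
```

If real-world scaling really lands in that 80-90% band, the second-hand-card route at least pencils out on paper.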
 
You are living in a fantasy land.
 
The Nvidia rep said that SLI works the same with NVLink. There may be less microstuttering because of the added bandwidth, but that's about it.
 
https://m.hardocp.com/news/2018/08/...ting_gives_interview_to_hothardware_on_turing
There are probably timestamps in the comments. It was towards the end of the interview.

https://hothardware.com/news/geforce-rtx-turing-nvidia-tom-petersen
They list the questions here.

Thanks for that, certainly corrected a few of my impressions of NVLink.

I think there's a bit of confusion here, because "SLI" is going to remain the branding, but the actual NVLink interface is very different from traditional SLI. SLI HB has only 2 GB/s of bandwidth; NVLink on GeForce has 100 GB/s. That is a massive difference, and it allows the two cards to cross-reference each other's memory and work as one a lot more efficiently. Conversely, SLI uses tricks like AFR to render independently on the cards, then uses the link as a pass-through to the monitor output. Further communication is carried out over PCIe, which has much less bandwidth than NVLink on top of everything else it carries, and is noisy and laggy.
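
For a feel of the bandwidth gap, here's the time to move one uncompressed 4K framebuffer over each link (ignoring protocol overhead, compression, and any overlap with rendering, so treat it as illustrative; the PCIe figure is an assumed rough usable rate):

```python
# Time to copy one uncompressed 4K RGBA8 framebuffer over each interconnect.
FRAME_BYTES = 3840 * 2160 * 4        # ~33 MB at 32 bits per pixel

links_gb_per_s = {
    "SLI HB bridge": 2,              # figure quoted above
    "PCIe 3.0 x16 (approx.)": 16,    # rough usable bandwidth, assumed
    "GeForce NVLink": 100,
}

for name, gb_s in links_gb_per_s.items():
    ms = FRAME_BYTES / (gb_s * 1e9) * 1e3
    print(f"{name:>22}: {ms:5.2f} ms per frame (vs. a 16.7 ms budget at 60 fps)")
```

At 2 GB/s the bridge can barely shuttle one finished frame per refresh, which is about all AFR needs; at 100 GB/s there's headroom for something closer to real memory sharing.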

I thought they would implement NVLink at the driver level and games would simply see the two cards as a single virtual graphics card, but it seems like they're not doing that yet. Instead, they'll focus on optimizing AFR and other legacy SLI implementations in the immediate future. But he did say that NVLink's features would be accessible through the DX APIs, so we can expect developers to start taking advantage of the new platform soon. The new APIs should allow for easier adoption, so you can expect a lot more games to support scaling, and better scaling. That's really the benefit: there are already games that scale at 80% with SLI, but some games scale a lot less, some don't scale at all, and some scale negatively. NVLink fixes that if implemented properly; you can expect higher scaling and much less variation in scaling across games. He seemed to hint that in the future they'll consider a more universal implementation, perhaps at the driver level, but bandwidth at the moment is not sufficient for that -- so maybe when the 300 GB/s NVLink trickles down to consumer GPUs.

Anyway, all speculation at this point, I hope we'll see some good initial results.
 
8K? Please, no. We are pushing the limits running games at 4K 60fps as it is; even a lot of UHD Blu-rays are upscaled and don't look that nice. 8K is premature.

Absolutely. We can't even run 4K at ultra settings in a lot of the latest games. Please just concentrate on adding things that make games more immersive. A GPU that can run 8K at high settings is probably 5 years away. Just give us more things like HDR and ray tracing.
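
The raw pixel counts back that up (straightforward arithmetic on the standard resolutions):

```python
# Pixel counts per frame; each step up roughly quadruples the shading work.
resolutions = {"1440p": (2560, 1440), "4K": (3840, 2160), "8K": (7680, 4320)}
base_4k = 3840 * 2160

for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name:>5}: {px/1e6:5.1f} MP  ({px/base_4k:.2f}x the pixels of 4K)")
```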
 