Remember when people were adding joke subtitles to the Hitler rant scene from Downfall (the one often confused with the Tom Cruise Valkyrie movie) and someone did an Nvidia one? I need to see if anyone has done one for Turing yet, but no time at the moment.
It's not a fair appraisal; it should be compared against AA in use, which is where it might look a little worse.
I will wait for 7nm cards. I still have several other upgrades to do that hold better value, since I'm barely gaming these days with so much work on my plate. For an in-between fix, I'll finish the case modding I left half-done.
Next gen will be a more substantial upgrade over my 1080Ti and software support will be better.
I think Nvidia is basically suggesting: "Look here, folks: Pascal is still good for what's out. Here's something to mess with while we improve on Turing/DXR and move toward 7nm, but we want more money than usual, because we have a lot of work left to do and it's expensive!"
I think the 20 series is going to show its true muscle at 4K against the 10 series, even without ray tracing. I can also see 1440p becoming the new resolution where things turn CPU-limited, given the immense power of the new graphics cards.
It's pretty obvious to me what Nvidia did here, and I'm surprised others haven't mentioned it. Nvidia stopped being primarily a PC gaming company several years ago; today they design GPUs with the AI/datacenter market in mind, and that's where their R&D is focused, because that's where they're seeing explosive growth. So naturally Volta and Turing were massive dies aimed at those markets to get ahead of Intel, and because PC gaming is still a big chunk of their revenue, they repurposed Turing's otherwise-idle silicon, like the tensor cores, for features like DLSS, which will hardly ever get used. Even the ray tracing part was meant for companies like Pixar, so they'd buy Quadro systems instead of Intel.
Nvidia needed a way to sell their failed Quadro cores, and here we are, left with a gigantic die that now costs $1,300 after tax for a 2080 Ti, with features we can't really use, or won't want to use, based on the performance metrics shown so far. Don't get your hopes up about AMD doing any better: they're busy with Ryzen, and they'll copy Nvidia's strategy too, because it makes money.
Yes, this isn't anything new; Nvidia has traditionally used cut-down pro cards. But we never had to deal with largely useless extra silicon like tensor cores on a consumer GPU, and be stuck paying for it.
Put it this way: would you take Turing, with its largely useless tensor and RT cores, at 750+ mm², or a refined Pascal chip at that size without them? I know which I'd pick.
I like it. Very cynical. The only flaw I see is that gaming volume vastly outpaces the Quadro market. If it served them better, at those volumes it would be no problem to fill that third of the die with CUDA cores rather than RT hardware.
My cynical theory is that Nvidia is using their expertise in areas AMD and Intel can't follow. If they execute just DLSS well and the 2080 Ti ends up 2x the performance of a 1080 Ti, how do you compete with that? It's kind of like AMD with Mantle, but Nvidia has the people and tech to pull it off.
I don't expect ray tracing to be ubiquitous like AA and AO are today, but the hardware and driver software is there now and it works, and that's what we need to get started!
Jay nailed it. Get some cream if you need it for the burn.
Cool story. I'll hold onto my 1070 and RX 580s.
I'm not paying for their projected cryptocurrency losses.
I haven't posted as much as I used to, but this one is the Vega CF video I was talking about.

Link to your channel?
You are living in a fantasy land.

Can I say, one of the things I'm genuinely excited about is NVLink. It enables a 100 GB/s two-way interconnect between the cards, 50x the bandwidth of SLI. This lets the cards use a shared memory buffer, which means the driver can treat them as a single graphics card. In theory we should see almost perfect scaling in every game. However, NVLink still has only a fraction of the local GDDR6 bus bandwidth (616 GB/s for the 2080 Ti) and a third of the bandwidth of the commercial Quadro NVLink (300 GB/s). It should still offer significant scaling and, more importantly, uniform scaling across games.
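The bandwidth comparisons above can be checked with a quick sketch. The figures are the ones quoted in the post, except the 2 GB/s number for the older SLI HB bridge, which is my assumption for illustration:

```python
# Bandwidth figures in GB/s, as quoted in the post above.
# The SLI HB bridge figure is an assumed value for comparison only.
bandwidths = {
    "SLI HB bridge": 2,        # assumption, not from the post
    "GeForce NVLink": 100,     # two-way, as quoted
    "Quadro NVLink": 300,
    "2080 Ti GDDR6 bus": 616,
}

nvlink = bandwidths["GeForce NVLink"]
print(f"NVLink vs SLI HB bridge: {nvlink / bandwidths['SLI HB bridge']:.0f}x")
print(f"NVLink vs Quadro NVLink: {nvlink / bandwidths['Quadro NVLink']:.2f}")
print(f"NVLink vs local GDDR6:   {nvlink / bandwidths['2080 Ti GDDR6 bus']:.2f}")
```

This reproduces the "50x SLI" claim and shows NVLink is still only about a third of Quadro NVLink and a sixth of the card's local memory bandwidth.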
I think this may make NVLink a decent upgrade-path option. Imagine, for the sake of argument, that Maxwell had NVLink. Say you bought a 980 Ti in the summer of 2015. When the 10-series cards came out, you still had the performance of a 1070, so you skipped that generation. Now the 20-series cards are coming out and your 980 Ti is getting old. Rather than shelling out $700 for a 2080 or $1,200 for a 2080 Ti, you could spend $250 on another 980 Ti, and with 85% scaling you'd have the performance of a 1080 Ti, so you're good for another generation. Clearly we don't have the numbers yet, but if it scales 80-90% in everything, I think that will bring dual-GPU setups back from the dead.
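The upgrade-path arithmetic above can be sketched out explicitly. The 85% scaling figure is the post's hypothetical, and the relative performance number for the 1080 Ti is a rough placeholder, not a benchmark:

```python
# Hypothetical upgrade-path math from the post above.
# All numbers are assumptions for illustration (980 Ti = 1.0 baseline).
perf_980ti = 1.00    # baseline single card
perf_1080ti = 1.85   # assumed ~1.85x a 980 Ti; placeholder, not a benchmark

scaling = 0.85       # the post's assumed dual-GPU scaling efficiency
dual_980ti = perf_980ti * (1 + scaling)  # second card adds 85% of a card

print(f"Dual 980 Ti: {dual_980ti:.2f}x baseline")
print(f"1080 Ti:     {perf_1080ti:.2f}x baseline")
```

Under these assumptions, a $250 second 980 Ti roughly matches a single 1080 Ti, which is the post's whole argument; the conclusion stands or falls entirely on the real-world scaling percentage.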
The Nvidia rep said that SLI works the same with NVLink. There may be less microstuttering because of the added bandwidth, but that's about it.
https://m.hardocp.com/news/2018/08/...ting_gives_interview_to_hothardware_on_turing
There are probably timestamps in the comments. It was toward the end of the interview.
https://hothardware.com/news/geforce-rtx-turing-nvidia-tom-petersen
They list the questions here.
8K? Please, no. We're pushing the limits running games at 4K 60fps as it is; even a lot of UHD Blu-rays are upscaled and don't look that great. 8K is premature.
I'll believe it when we get some reliable third party benchmark results. Seems like marketing BS to me.
VRR isn't going to be widely available until 2019 for TVs anyway....