SMAA's temporal aliasing can be pretty terrible, though. TAA was meant to combat that. The only significant roadblock with DLSS is that its results depend entirely on the neural net: you have to train it (meaning feed it a lot of 'perfect' reference renders: time and money), and it will try to...
NV's CEO has said in a recent interview that 1080 Ti, 1080, 1070 Ti, and 1070 stock has finally dried up, with the 1060 depleting soon, so it really is just supply and demand at this point.
Only if all the games you care about support it.
Otherwise, no, not acceptable. Actually, screw that, it's not acceptable at all, since adoption is still very limited even this long after DX12 was introduced.
The RTX 2070 is more or less your laptop's GTX 1080 with RT/DLSS added, so if you get one, expect your laptop's performance plus a little extra from better temps (higher sustained clocks) and more OC headroom.
The RTX 2080 is a 1080 Ti, which lets many games hit 4K60 (or really...
Okay, since AMD just announced the Radeon VII on 7nm with performance around a 1080 Ti for $699, what does that mean for Navi? Navi will be 7nm too, and while a new arch getting perf gains is expected, I can't see it doubling shader power without another shrink.
So that means GTX 1080 perf at...
I don't think you're getting what I'm saying; I'm talking about the Titan RTX having *fewer* Tensor Cores than the Titan V. If you were marketing the Titan RTX towards folks who need more/better Tensor Cores, then why advertise it with fewer than the previous generation instead of touting more and...
I don't buy it. If they really were just Tensor Cores that can operate at lower precisions for speed, shouldn't they market them as better Tensor Cores, since, as you say, they have high-paying customers buying GPUs precisely for more/better Tensor Cores?
Instead, they market it (even the Titan RTX)...
Dunno about that. Enabling DX12 in BFV already tanks framerates for both NV and AMD, so there's obviously some trouble there, while Sniper Elite 4 sees gains for both.
I guess it really depends on the implementation.
So how does that explain the discrepancy in RT performance between the Titan V (which has more Tensor Cores and shader units) and both the 2080 Ti and the Titan RTX? ~15% more boost clock (with roughly ~10% fewer resources) resulting in ~30% more performance in favor of the RTXs...
Read Microsoft's DXR programming guide and you'll understand why it works on Titan Vs. Tensor Cores do get used for the denoising step of the RT process, though. The RT cores are more or less fixed-function (at least as they're known right now), so they don't do anything other than...
It really shouldn't be surprising, as BFV's RT work started on Titan Vs (DICE said so themselves). The first few BFV RT demonstrations actually ran on regular shaders and were only later accelerated via RT Cores, so the Titan V running it is normal.
All things considered, the performance drop from just the change of API is already a real deal-breaker at 4K. I imagine some additional optimization can still be had just by improving the DX12 renderer, which could in turn make RT more palatable. That 17 fps at 1440p gets real damn close to 60 fps...
Not much. Slightly higher boost clocks, but not enough to make a big deal out of if you're not overclocking anyway - no real-world difference. Best to just get the quietest version you can find.
Microsoft's blog post on MSDN about DXR indicates the tech is fundamentally a compute load, so it technically doesn't require any extra hardware or GPU engines to run. They even went as far as encouraging developers to pick it up and use it on any in-market GPU to see what they could do with...
DX12 didn't magically appear out of nowhere. DX12 has been in the works for a while, and AMD wanted to get the jump, so they released their own version of it and called it 'open source'. Intel and NV definitely wouldn't support it since not only is it hardware-specific, but DX12 development was...
You've gotten the idea in reverse. Developers have always wanted more because their development PCs could do more. However, it's not always economically feasible to design and manufacture a console with the best hardware - thus the XB1 and PS4, which are now extremely close to normal PC hardware...
I would like to point out that RTX (at least the RT/denoising part) runs on top of DXR, which is an extension of DX12. All DX12-capable hardware can run it, since Microsoft says it's designed as a purely compute-based extension. RTX's advantage is that it has dedicated hardware for the heavier...
It's sort of already there. Upcoming RTX RT games are supposed to run on top of DXR, which is an extension of DX12, meaning all DX12-capable hardware should be able to run it, since it's specified as a purely compute workload. The performance hit of not having dedicated hardware is a completely...
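Since a few of the posts above lean on that "DXR is just a DX12 feature" point, here's a minimal sketch (my own, not from any game or driver) of how an app asks whether DXR is exposed at all - the standard D3D12 feature query, assuming the Windows 10 1809+ SDK and linking against d3d12.lib:

    // Query the DXR (raytracing) tier reported by the default D3D12 adapter.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main()
    {
        // Simplified device creation; real code would enumerate adapters via DXGI.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                     IID_PPV_ARGS(&device))))
        {
            std::printf("No D3D12-capable device found.\n");
            return 1;
        }

        // OPTIONS5 carries the RaytracingTier capability.
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                                  &opts5, sizeof(opts5)))
            && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        {
            std::printf("DXR exposed (tier %d).\n", (int)opts5.RaytracingTier);
        }
        else
        {
            std::printf("Driver does not expose DXR on this GPU.\n");
        }
        return 0;
    }

Note the query only says whether the driver exposes DXR, not how it runs it - dedicated RT cores or plain compute shaders both look the same here, which is exactly why the performance hit is the real differentiator.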
This is wise.
Given the glimpses we've had of the theoretical performance penalty/uplift of all the RTX technologies combined, it could prove a massive leap forward once titles start showing up. Not really an 'if' but a 'when', but if you have last gen's midrange and above cards...
Well, even if you clocked them the same, you'd end up at base 1080 FE speeds, which is *still* a double-digit gain (25-40%) over a 1070. So...
EDIT: Oh, sorry, correction - it's even more than that, because a 1080 FE is already that much ahead, but with the theoretical difference between archs displayed in this...
Huh? Check it again - both the 1080 and the 2070 were clocked at similar speeds (see the clock speed consistency section). Clock-for-clock it's still better, despite a deficit in cores and other resources.
Not quite. Check the GPU Clock Speed Consistency section of the review: both cards are AIB custom overclocked models and boost to similar clock speeds. A ~30 MHz difference on boost clocks up around 1.9-2.0 GHz is under 2%, so it isn't likely to create an 8-16% performance gap, especially since the 1080 has more cores than the 2070.
I understood what you meant, but two *full* 4K feeds, one per eye, isn't up to the GPU makers - it's up to the HMD makers. If your HMD takes only a single large input cable but has twin displays with independent resolutions, then VR SLI already caters to one GPU per eye, regardless of...
VR SLI exists, and it does exactly what you're stating. I imagine 2x 2080 Ti 4K@90 will probably be pretty spectacular if some dev used all VRWorks techniques to their full potential.
Generally speaking, they have a neural network learn what a frame looks like at the absolute highest quality (64x AA, same as movie quality), then have the Tensor Cores run real-time inference to reconstruct the frame so it looks like that ground-truth render.
It can be done to render at a...
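In rough notation (mine, not NVIDIA's - f_θ is just a stand-in name for the network), the training step amounts to something like

    minimize over weights θ:   Σ over training frames   || f_θ(aliased, lower-res frame) − 64x-supersampled reference of that frame ||²

and at runtime the Tensor Cores only evaluate the already-trained f_θ on each frame, so the expensive part (the training and the 'perfect' renders mentioned earlier) stays on NVIDIA's side.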
I've owned 200+W GPUs and 150W ones, and in a room with poor cooling and ventilation it makes a difference - not in performance, but for the people in the same room when the card runs full-bore for hours on end.
But yeah, I've been gravitating towards the larger GPU and thinking of downclock/power...
Not really last-gen vs new-gen, but more like an x80 Ti (usually a 250W GPU) vs an x70 (usually 140-160W).
If the massive array of the x80 Ti will mostly sit at low power (say, ~30% power, so roughly 75W of heat) vs an x70 running at high power (70-90%, so roughly 100-145W), with both hitting my performance target...
While I will definitely use the best I can afford, I try to stay away from 250+W GPUs (room temps are already hot enough as it is), and I'm gaming mainly at 1080p60. I do plan on getting 1080p144 or 1440p60+ in the somewhat near future, but I like to target performance with my purchases and...
So I'm currently on the fence about upgrading my aging GTX 970, which has given me many years of great 1080p gaming.
Games nowadays are getting more demanding (especially for VRAM), and I'm looking into upgrading. I'll definitely be waiting for NV's 11-series, but I had a question...
Didn't know it was 64. I guess that explains why, with my Skyrim running at 65-70 fps (with all manner of mods + a heavy ENB), the game only manages to screw up ever so slightly.
I like the methodology used - comparing much earlier drivers to see the progression of performance improvement. This kinda validates the "Launch vs Current Driver Perf" slide they presented.
From 368.25 vs 378.78:
- Hitman numbers almost line up with 23% (22.55% by my calculation).
- GoW4 got 6%...
The official slides and comparisons were all initial game-release driver vs this driver. But, as with many things picked up by the online media, many missed that part and assumed the comparison was against the last driver released. The figures have always been stated as the cumulative...
With the 980/970, the GPU doesn't have full HW acceleration for HEVC - only hybrid (partial) acceleration. The 950/960 and all Pascal cards have full HW HEVC decode support.
Using MPC, you need to set LAV Video's decoder to DXVA2 (Copy-Back) to take advantage of this acceleration; CUVID won't work.
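For reference (going from memory of LAV Filters' settings dialog, so the exact labels may differ slightly): during playback, right-click → Filters → LAV Video Decoder, and under Hardware Acceleration set "Hardware Decoder to use" to DXVA2 (copy-back), with HEVC ticked in the codec checkboxes below it.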