Overhauled NVIDIA RTX 50 "Blackwell" GPUs reportedly up to 2.6x faster vs RTX 40 cards courtesy of revised Streaming Multiprocessors and 3 GHz+ clock

I hope the next gen is a major leap for every player.

We all know, though, that Nvidia is going to be selling 5070s with 12GB and maybe even drop an 8GB version... or a 16GB version with a 64-bit bus. lol
 
This one?
[attached screenshot]

Although now that I look at it a second time, I realize it's not just a shadow problem, it's also a reflection problem. For example, the specular reflection in the puddle directly under the front tire should be very dark (because the reflected tire is dark). Still... it is a shadow problem too.
Ah yeah, you found it. I couldn't forget just how bad the non-RT shot looked when it really shouldn't have. They even have the headlights on for RT lol.
 
I just can't get over the irony of pushing RT for more accurate/realistic lighting on the one hand, and on the other pushing frame generation to "infer" what the scene should look like just to maintain any semblance of high framerates.
Not ironic at all. They're complementary, and without the combination there would only be more bickering about how unplayable and useless RT is.

Fun fact, if not totally related: how we see to begin with is based on an "inference gimmick". At the back of each eyeball where the optic nerve connects, there's a blind spot with no photoreceptors. But we don't see a little black hole in our field of view when we look out at the world, because the brain is amazing and fills in the missing information by inferring what should be there from the surrounding information. There are easy tests to see this in action.

So how and what we see isn't even 1:1 reality. But then, reality isn't 1:1 reality either but a projection, and that's a whole 'nuther topic.
 
Stopgap may be the more accurate term for combining RT with AI/DL frame-generation. And it would only be absurd if they did NOT enable the combination of these two techs so that RT can be used at acceptable framerates and enjoyed on a sliding-scale, rather than RT being an all-or-nothing toggle to 15FPS.

Zooming out for a moment, there's something else going on with the weird bickering about Nvidia heavily pursuing and investing in RT to begin with. A lot of it just seems to stem from unrelated resentment of Nvidia's market position, or the price of GPUs, or whatever personal axe some people have to grind.
Yep, bizarre to say the least - although the opposite stance of rabid fanboys/defenders/shills is also prevalent.
RT has ALWAYS been the holy grail for computer graphics, stretching back to 1968, and even further to the 1600s in physics and conceptually. My sorry GenX ass first played with raytracing in 1990, when it took days to render a single scene, and it was amazing for the time.
It took me a bit longer to get into the graphics side of things, but I did write a simple CPU ray/path-trace renderer (not realtime of course) in the 2010s for fun.
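For the curious, the core of a toy CPU ray tracer really is tiny. Here's an illustrative Python sketch (my own throwaway example, not the renderer mentioned above, with all the scene values made up): one ray per pixel, one sphere, and a simple Lambert shade written out as a PPM image.
```python
# Minimal toy CPU ray tracer: one sphere, one point light, Lambertian shading.
# Illustrative sketch only; scene values are arbitrary.
import math

WIDTH, HEIGHT = 160, 120
SPHERE_CENTER = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
LIGHT_POS = (2.0, 2.0, 0.0)

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def normalize(v):
    l = math.sqrt(dot(v, v))
    return (v[0] / l, v[1] / l, v[2] / l)

def hit_sphere(origin, direction):
    """Return nearest positive ray parameter t, or None if the ray misses."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS * SPHERE_RADIUS
    disc = b * b - 4 * c            # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

with open("sphere.ppm", "w") as f:
    f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Camera at the origin looking down -z; map pixel to the view plane.
            u = (2 * x / WIDTH - 1) * (WIDTH / HEIGHT)
            v = 1 - 2 * y / HEIGHT
            direction = normalize((u, v, -1.0))
            t = hit_sphere((0, 0, 0), direction)
            if t is None:
                f.write("30 30 60 ")            # background color
            else:
                p = (direction[0] * t, direction[1] * t, direction[2] * t)
                n = normalize(sub(p, SPHERE_CENTER))
                l = normalize(sub(LIGHT_POS, p))
                shade = max(dot(n, l), 0.0)     # Lambert diffuse term
                f.write(f"{int(255 * shade)} {int(64 * shade)} {int(64 * shade)} ")
        f.write("\n")
```
Everything past this point (bounces, materials, sampling) is where the "days to render a scene" came from back then.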
Some GenZ and GenAlpha gamers seem to believe Nvidia invented RT simply to make GPUs more expensive. That isn't what's happening.
I've noticed this too - people nowadays are way too enthralled by whatever consumer marketing hits their dopamine release trigger.
 
And you cut out the rest of my reply giving possible explanations. I know the limit for lights in DX10 and prior was 7 point lights and 1 global light. I have not worked with DX11 and DX12, so I don't know if that has changed. That said, I would like to see what you're referring to.

Fixed function lights haven't been a thing in literal decades. The API and your GPU have no idea what a light is at this point. As far as they're concerned, they're just crunching numbers.

Shadows get expensive in raster because you're literally redrawing the geometry. Modern games are probably drawing over a million triangles of shadows in some cases. Point lights are even worse for shadow casting because this needs to happen 6 times to be omnidirectional. Muzzle flashes casting shadows are rare because they're horrible performance-wise.
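To put rough numbers on that multiplier, here's a hypothetical back-of-the-envelope sketch in Python (the counts and the "no culling" assumption are mine, purely for illustration): every shadow-casting point light re-renders the scene's shadow casters once per cube-map face before the main pass even draws a pixel.
```python
# Back-of-the-envelope cost of shadow maps in a rasterizer. Hypothetical numbers;
# real engines cull aggressively, but the multiplier is the point.
CUBE_FACES = 6  # +X, -X, +Y, -Y, +Z, -Z for an omnidirectional shadow map

def shadow_pass_triangle_count(scene_triangles, spot_lights, point_lights):
    """Rough count of triangles rasterized just for shadow maps.

    Worst case: every light shadows the whole scene (no culling).
    """
    spot_cost = spot_lights * scene_triangles                  # 1 depth render each
    point_cost = point_lights * CUBE_FACES * scene_triangles   # 6 depth renders each
    return spot_cost + point_cost

# Example: a 1M-triangle scene with 4 shadowed spot lights and 2 shadowed point
# lights already pushes ~16M triangles through the rasterizer for shadows alone,
# which is why transient lights like muzzle flashes usually don't cast shadows.
print(shadow_pass_triangle_count(1_000_000, spot_lights=4, point_lights=2))
```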
 
I've noticed this too - people nowadays are way too enthralled by whatever consumer marketing hits their dopamine release trigger.
I think in this case it is more of a team loyalty/sour grapes thing. At the moment, nVidia crushes AMD at real-time raytracing. While it is still a LOOONG way from where we'd like it to be, nVidia has chosen to put more hardware on their GPUs to help it run faster than AMD has. Fair enough; different companies have different priorities, and it could well change next generation if AMD decides to beef up their RT hardware a lot.

However, this causes some sour grapes for the people who view AMD as their "team" rather than just the maker of a tool they bought. They see the shiny RT footage, feel jealous, and thus decide to hate on it and hate on nVidia for it. They want it, but aren't willing to admit they want it since their team isn't doing it as well, so they instead hate on it.

If AMD were to suddenly leap ahead, I guarantee they'd do a complete 180 and RT would become the most important thing to hit gaming in the past 20 years, something everyone should have/want, etc., etc.

Fanboys gonna fanboy.

As a practical matter, I expect things will slowly move towards real-time RT, not only because it looks good but because a fully path-traced engine simplifies many things in art design, which of course frees up time for other things. It's going to be quite a while before that happens, as even the high-end hardware still isn't powerful enough to really pull that off, but I think it will happen in the long run.
 
NVIDIA GeForce RTX 50 series to feature DisplayPort 2.1, using TSMC 3nm node

As per the noted Nvidia leaker kopite7kimi, the upcoming 5000 series will feature DisplayPort 2.1 support... AMD has turned this feature's presence on all RDNA3 desktop GPUs into a marketing win... the physical modifications for the RTX 50 series may extend further, as it is anticipated that the series will also feature a PCIe Gen5 interface and an updated 16-pin connector known as 12V-2x6... it's not clear if mid-range to entry-level models will continue to use the standard 8-pin connector...

https://videocardz.com/newz/nvidia-...o-feature-displayport-2-1-using-tsmc-3nm-node
 
DP 2.1 is quite the generic statement.

DP 2.1 UHBR10 (40 Gbps) has less bandwidth than HDMI 2.1 (48 Gbps), and UHBR13.5 (54 Gbps) isn't that big of a deal more.

Like the non-workstation AMD cards of the last gen, this could simply be UHBR13.5 DP 2.1, giving only 12.5% more bandwidth than the old HDMI 2.1...
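If anyone wants to sanity-check those link rates, here's a quick throwaway Python script. It uses the raw link rates quoted above and ignores blanking, encoding overhead (128b/132b for DP 2.1, 16b/18b for HDMI 2.1 FRL), and DSC, so treat it as rough numbers only; the 4K 240 Hz example mode is my own pick for illustration.
```python
# Raw link rates (Gbps) mentioned in the thread, plus UHBR20 for reference.
LINKS = {
    "HDMI 2.1 (FRL)":   48,
    "DP 2.1 UHBR10":    40,
    "DP 2.1 UHBR13.5":  54,
    "DP 2.1 UHBR20":    80,
}

def uncompressed_gbps(width, height, refresh_hz, bits_per_pixel=30):
    """Raw pixel data rate for a mode (10-bit RGB = 30 bpp), ignoring blanking."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

mode = uncompressed_gbps(3840, 2160, 240)   # ~59.7 Gbps for 4K 240 Hz 10-bit RGB
for name, rate in LINKS.items():
    vs_hdmi = rate / LINKS["HDMI 2.1 (FRL)"] - 1      # UHBR13.5 comes out at +12.5%
    fits = "fits" if rate >= mode else "needs DSC for"
    print(f"{name}: {rate} Gbps ({vs_hdmi:+.1%} vs HDMI 2.1), {fits} 4K240 10-bit")
```
By that rough math, only UHBR20 carries 4K 240 Hz 10-bit uncompressed; everything else leans on DSC, which is why the specific UHBR tier matters more than the "DP 2.1" label.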
 