AMD RDNA 2 gets ray tracing

Is this hardware or software ray tracing? Keep in mind that even Nvidia has already enabled ray tracing on older cards like the GTX 1080, but it's done in software. Only the RTX series has actual dedicated ray-tracing hardware on it. From what I've read, the ray tracing in DX12 Ultimate WILL be able to make use of the dedicated ray-tracing hardware in Nvidia RTX cards.

Either way, it's great that the new AMD cards will support ray tracing, but will they support it like the Nvidia RTX cards do (in hardware), or will they "support" it like the Nvidia GTX cards do (in software)?
 
ya
GotNoRice
I was wondering the exact same things, and will there be any difference in speed between software and hardware, etc.?
 
MUCH better link....straight from Microsoft:

https://devblogs.microsoft.com/directx/announcing-directx-12-ultimate/

DirectX Raytracing 1.1
DirectX Raytracing (DXR) brings a new level of graphics realism to video games, previously only achievable in the movie industry. The effects achievable by DXR feel more real, because in a sense they are more real: DXR traces paths of light with true-to-life physics calculations, which is a far more accurate simulation than the heuristics based calculations used previously.

We’ve already seen an unprecedented level of visual quality from titles that use DXR 1.0 since we unveiled it, and built DXR 1.1 in response to developer feedback, giving them even more tools with which to utilize DXR.

DXR 1.1 is an incremental addition over the top of DXR 1.0, adding three major new capabilities:

  • GPU Work Creation now allows Raytracing. This enables shaders on the GPU to invoke raytracing without an intervening round-trip back to the CPU. This ability is useful for adaptive raytracing scenarios like shader-based culling / sorting / classification / refinement. Basically, scenarios that prepare raytracing work on the GPU and then immediately spawn it.
  • Streaming engines can more efficiently load new raytracing shaders as needed when the player moves around the world and new objects become visible.
  • Inline raytracing is an alternative form of raytracing that gives developers the option to drive more of the raytracing process, as opposed to handing work scheduling entirely to the system (dynamic-shading). It is available in any shader stage, including compute shaders, pixel shaders etc. Both the dynamic-shading and inline forms of raytracing use the same opaque acceleration structures.
When to use inline raytracing
Inline raytracing can be useful for many reasons:

  • Perhaps the developer knows their scenario is simple enough that the overhead of dynamic shader scheduling is not worthwhile. For example, a well constrained way of calculating shadows.
  • It could be convenient/efficient to query an acceleration structure from a shader that doesn’t support dynamic-shader-based rays. Like a compute shader or pixel shader.
  • It might be helpful to combine dynamic-shader-based raytracing with the inline form. Some raytracing shader stages, like intersection shaders and any hit shaders, don’t even support tracing rays via dynamic-shader-based raytracing. But the inline form is available everywhere.
  • Another combination is to switch to the inline form for simple recursive rays. This enables the app to declare there is no recursion for the underlying raytracing pipeline, given inline raytracing is handling recursive rays. The simpler dynamic scheduling burden on the system can yield better efficiency.
Scenarios with many complex shaders will run better with dynamic-shader-based raytracing, as opposed to using massive inline raytracing uber-shaders. Meanwhile, scenarios that have a minimal shading complexity and/or very few shaders will run better with inline raytracing.

If the above all seems quite complicated, well, it is! The high-level takeaway is that both the new inline raytracing and the original dynamic-shader-based raytracing are valuable for different purposes. As of DXR 1.1, developers not only have the choice of either approach, but can even combine them both within a single renderer. Hybrid approaches are aided by the fact that both flavors of DXR raytracing share the same acceleration structure format, and are driven by the same underlying traversal state machine.

Best of all, gamers with DX12 Ultimate hardware can be assured that no matter what kind of Raytracing solution the developer chooses to use, they will have a great experience.

Now say after me:

D X R

DirectX RayTracing.

Raytracing is DXR...
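
For anyone wondering how an engine actually finds out which of these tiers it is getting, here's a rough host-side sketch (assuming `device` is an already-created ID3D12Device; error handling trimmed). Note that the reported tier only tells you what the driver exposes through D3D12, not whether rays are traversed on dedicated hardware or on shader cores:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Rough sketch: query which DXR tier the device/driver exposes.
void PrintDxrTier(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5)))) {
        std::printf("OPTIONS5 query failed (runtime too old?)\n");
        return;
    }
    switch (opts5.RaytracingTier) {
        case D3D12_RAYTRACING_TIER_NOT_SUPPORTED:
            std::printf("No DXR exposed by this device/driver\n");
            break;
        case D3D12_RAYTRACING_TIER_1_0:
            std::printf("DXR 1.0 (dynamic-shader-based raytracing only)\n");
            break;
        default:  // D3D12_RAYTRACING_TIER_1_1 or newer
            std::printf("DXR 1.1 (adds inline raytracing, GPU work creation, etc.)\n");
            break;
    }
}
```

Tier 1.1 is also what gates the inline form described above: there is no separate raytracing pipeline state in that mode, the app just binds the same acceleration structure as an SRV to an ordinary compute or pixel shader and walks it with a RayQuery in HLSL.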
 
So will the new DX12 Ultimate fully support the new cards that are out now (5700, 2070, 2060, 2080s), or are we going to have to upgrade to get the shiny DX12 sticker telling us we're now ready to game on?
 
So will the new DX12 Ultimate fully support the new cards that are out now (5700, 2070, 2060, 2080s), or are we going to have to upgrade to get the shiny DX12 sticker telling us we're now ready to game on?

Read and learn - click "Supported GPUs" down at the bottom:

https://www.nvidia.com/en-us/geforce/technologies/directx-12-ultimate/

Of course, that's just on the nVidia end. I don't believe any of the current AMD cards support it, but someone can correct me if I'm wrong.
 
From what I understand, it requires the RDNA2 architecture, so no current cards would work (and they would probably be too slow anyhow with a software-based approach).
 
It might be just my observation, but I have the feeling that Nvidia may have indirectly helped AMD achieve their real-time hybrid ray tracing implementation through Microsoft: NV and MS collaborated on DXR 1.0, MS decided to build that framework into their next Xbox console, and MS then defined the requirements for AMD to follow in developing the custom APU for the new console. Maybe Nvidia was willing to let this happen just to make sure their R&D and transistor/die-space investments in Turing (RTX, Mesh Shaders, VRS) get widespread adoption. If those technologies stay relevant as baseline features, the next-gen consoles and GPUs from AMD and Intel will have similar implementations via common API specs/requirements, as introduced in Turing, instead of those features fading into irrelevance like some of Nvidia's other technological investments (PhysX, Simultaneous Multi-Projection, etc.). With DX12 Ultimate and Vulkan Ray Tracing, Nvidia has pretty much had a major hand in defining how hybrid real-time ray tracing is done in the industry. According to the Digital Foundry videos featuring the Xbox Series X, AMD's RDNA2 will also have its own "RT cores" to handle BVH traversal in hardware. I do wonder how de-noising will be handled by each vendor going forward. Turing can possibly use its tensor cores, while RDNA2 might be able to do it using shaders.
 
It might be just my observation, but I have the feeling that Nvidia may have indirectly helped AMD achieve their real-time hybrid ray tracing implementation through Microsoft: NV and MS collaborated on DXR 1.0, MS decided to build that framework into their next Xbox console, and MS then defined the requirements for AMD to follow in developing the custom APU for the new console.

There’s no intellectual capital in DXR. Its concepts have been common knowledge in the graphics industry for decades. The secret sauce is in the hardware implementation and nvidia certainly didn’t share that with AMD.

And of course it’s in everyone’s best interest to define a common api for raytracing (DXR/Vulkan RT). Just like for any other graphics feature that you want developers to use.

I do wonder how de-noising will be handled by each vendor going forward. Turing can possibly use its tensor cores, while RDNA2 might be able to do it using shaders.

No games use tensors for denoising on Turing. RDNA will denoise just fine.
 
There’s no intellectual capital in DXR. Its concepts have been common knowledge in the graphics industry for decades. The secret sauce is in the hardware implementation and nvidia certainly didn’t share that with AMD.

I made no such assertion. I just said that the framework for the hardware implementation of DXR, as well as VRS and Mesh Shading, is very much based on Microsoft and Nvidia's work on them, which naturally builds on the hardware capabilities Nvidia put into Turing. I never implied that Nvidia directly shared any proprietary technology with AMD. The DXR 1.0 API spec was developed by MS and NV, after which MS used the resulting requirements in separately developing the custom GPU in the Xbox Series X with AMD.


And of course it’s in everyone’s best interest to define a common api for raytracing (DXR/Vulkan RT). Just like for any other graphics feature that you want developers to use.

I totally agree, but my point is that it's a win-win for AMD, MS and Nvidia. AMD gets hybrid RTRT sooner rather than later; Nvidia gets broader developer support for their technologies, especially RTX, so their cards stay relevant in the midst of the AMD-powered next-gen consoles; and MS achieves a more unified platform across Xbox and Windows gaming PCs.

No games use tensors for denoising on Turing. RDNA will denoise just fine.

I'm aware of that, which is why I specifically said "going forward" and "possibly".
 
I made no such assertion. I just said that the framework for the hardware implementation of DXR, as well as VRS and Mesh Shading, is very much based on Microsoft and Nvidia's work on them, which naturally builds on the hardware capabilities Nvidia put into Turing. I never implied that Nvidia directly shared any proprietary technology with AMD. The DXR 1.0 API spec was developed by MS and NV, after which MS used the resulting requirements in separately developing the custom GPU in the Xbox Series X with AMD.

Yeah I get what you’re saying. I just don’t think that DXR itself was particularly helpful in designing hardware.

I’m sure certain hardware optimizations were possible given the constraints imposed by the DXR API, but the core problems to be solved (ray/box intersection and ray/triangle intersection) have been around since long before DXR.
 
I made no such assertion. I just said that the framework for the hardware implementation of DXR, as well as VRS and Mesh Shading, is very much based on Microsoft and Nvidia's work on them, which naturally builds on the hardware capabilities Nvidia put into Turing. I never implied that Nvidia directly shared any proprietary technology with AMD. The DXR 1.0 API spec was developed by MS and NV, after which MS used the resulting requirements in separately developing the custom GPU in the Xbox Series X with AMD.

This isn't correct, not even close. All DX12 cards support DXR. The difference lies in the implementation. Nvidia contributed no more to the API than AMD did.

That's why I've said multiple times that it's really disingenuous to say Nvidia developed ray tracing in games; it's not true at all. Imagination has done more to put ray tracing in games than even Nvidia.

What you can say, though, is that Nvidia is the only one that ships hardware today with DXR enabled. That's true. But go further than that and you're getting into areas where it's just not accurate.
 
This isn't correct, not even close. All DX12 cards support DXR. The difference lies in the implementation. Nvidia contributed no more to the API than AMD did.

What you can say, though, is that Nvidia is the only one that ships hardware today with DXR enabled. That's true. But go further than that and you're getting into areas where it's just not accurate.

If you're just talking about DirectX 12 as it was released back in 2015, you are correct, but I was referring specifically to DX12 Ultimate, whose four new key features require back-end hardware support that at the moment is only available on RTX Turing GPUs shipping since 2018. It's blatantly obvious at this point that Nvidia contributed more, and for much longer, to DX12 Ultimate, seeing that it's their hardware that's being fully supported first and it's in their best interest to do so. Coincidence? I think not. Please don't ignore that MS and NV were the main collaborators in developing the DXR 1.0 API in Windows, which leverages RTX hardware and has actually been used in games that support it. I'm not saying that AMD didn't contribute anything to DX12 Ultimate, but at best, AMD's contributions to the new API stem more from their work with MS on the Xbox Series X, which to me still looks based on what RTX already has (RT Cores, Mesh Shading, VRS, Sampler Feedback). Remember, AMD and Intel eventually had to announce their future support of ray tracing in their roadmaps after RTX came out. Not before.

That's why I've said multiple times that it's really disingenuous to say Nvidia developed ray tracing in games; it's not true at all. Imagination has done more to put ray tracing in games than even Nvidia.

What you can say, though, is that Nvidia is the only one that ships hardware today with DXR enabled. That's true. But go further than that and you're getting into areas where it's just not accurate.

I didn't say what you think I said, because I was clearly talking about the DX12 Ultimate API, which is based on RTX Turing hardware capabilities, not about "who invented ray tracing in games". It's Nvidia's own hardware-accelerated real-time hybrid ray tracing for PC games, along with Mesh Shading and Variable Rate Shading, that was adopted in DirectX 12 Ultimate as the basic underlying hardware feature-set design, and those were evidently established with the RTX Turing feature set back in 2018. Surely it isn't just a coincidence that Turing supports four out of four DX12 Ultimate features at the outset? AMD and Intel will have to release similar hardware implementations that support these newly announced DX12 features in due time.
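
For what it's worth, the four features being argued about here are exactly the ones an application can query for itself. A rough sketch of that check, assuming `device` is an already-created ID3D12Device and using the minimum tiers Microsoft lists for the DX12 Ultimate feature level:

```cpp
#include <windows.h>
#include <d3d12.h>

// Rough sketch: query the four DX12 Ultimate features on an existing device.
bool LooksLikeDx12Ultimate(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 o5 = {};   // raytracing tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 o6 = {};   // variable rate shading tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 o7 = {};   // mesh shader + sampler feedback tiers
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &o5, sizeof(o5))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &o6, sizeof(o6))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &o7, sizeof(o7))))
        return false;

    return o5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1            // DXR 1.1
        && o6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2   // VRS Tier 2
        && o7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1             // Mesh Shaders
        && o7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;     // Sampler Feedback
}
```

That is basically what the "shiny DX12 Ultimate sticker" boils down to: whether the card and driver report all four of those tiers.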
 
I mean, ray tracing in software has existed for decades; it's been used for movie graphics, for example.

Nvidia didn't invent ray tracing, what they did was make it viable in real-time for the general public, more so than any other company in recent times.

DXR followed from Nvidia proving RTX worked. MS would never have made a theoretical API for hardware that didn't exist. Nvidia has everything to do with DirectX and Vulkan now adopting ray tracing.
 
Got it... So anyone who has bought, or will be buying, a Navi-based card, whose architecture is less than 9 months old, can't make use of the new DX12 Ultimate features... Nice (y)
 
 
GigaRays is not an industry-wide standard, it is the way NVIDIA presents their RT performance :LOL:
 
how quickly will RDNA2 arrive in the mid-range and low-end?

i.e. how quickly will we see the 5500 series replaced with something that has HDMI 2.1 and AV1 decode..
 
how quickly will RDNA2 arrive in the mid-range and low-end?

i.e. how quickly will we see the 5500 series replaced with something that has HDMI 2.1 and AV1 decode..

Not very fast, as you are part of a niche of a niche... it is business... profit dictates.
 
how quickly will RDNA2 arrive in the mid-range and low-end?

i.e. how quickly will we see the 5500 series replaced with something that has HDMI 2.1 and AV1 decode..
Xbox Series X and PS5 :D

It will be interesting to see what video support both will have. The Xbox Series X and PS5 are both supposed to have optical drives:
https://www.windowscentral.com/will-xbox-series-x-play-discs
https://www.trustedreviews.com/news/does-the-ps5-have-a-disc-drive-4017404

That will pretty much obsolete my HTPC in virtually everything, come to think of it.
 
Finally figured out why the demo looked so off and unimpressive, besides the rather cruddy frame rate. While the demo shows different types of reflections (curved surfaces, flat, uneven, rough, etc.), none really looked that great compared to typical ray-traced images. The BIGGEST thing I now notice is that there is only one level of reflections. Objects that are reflecting (virtually everything), when seen on a reflective surface, have no reflections of their own; a mirror reflected in another mirror, in other words, would be blank instead of showing the mirrors reflecting each other. Obviously you would need a virtually infinite number of rays to calculate reflections of reflections all the way down (typical offline ray tracing gets this to a good level), but at least two levels of reflections would probably have made this demo look better.
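
To make the bounce-depth point concrete, here's a toy CPU-side sketch (nothing to do with the demo's actual renderer, just two parallel mirror planes and a recursion cap). With the cap at 1, a mirror seen inside another mirror comes back as flat "blank" shading, which is exactly the effect described above; raising the cap lets the mirrors show each other:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Two infinite mirrors at y = 0 and y = 2 facing each other; anything a ray
// escapes to is treated as "sky". Recursion stops once depth hits maxBounces.
static Vec3 trace(Vec3 origin, Vec3 dir, int depth, int maxBounces) {
    if (dir.y == 0.0f)
        return { 0.4f, 0.6f, 1.0f };                 // parallel to both mirrors: sky

    float planeY = (dir.y < 0.0f) ? 0.0f : 2.0f;     // which mirror the ray heads toward
    float t = (planeY - origin.y) / dir.y;
    if (t <= 0.0f)
        return { 0.4f, 0.6f, 1.0f };                 // mirror is behind the ray: sky

    Vec3 hit = add(origin, mul(dir, t));
    Vec3 base = { 0.05f, 0.05f, 0.05f };             // direct shading of the mirror surface

    if (depth >= maxBounces)
        return base;                                 // bounce budget spent: reflection is "blank"

    Vec3 reflDir = { dir.x, -dir.y, dir.z };         // reflect about the plane normal (0, +/-1, 0)
    return add(base, mul(trace(hit, reflDir, depth + 1, maxBounces), 0.8f));
}

int main() {
    Vec3 eye = { 0.0f, 1.0f, 0.0f };
    Vec3 dir = { 0.3f, -1.0f, 0.0f };                // looking down into the lower mirror
    for (int bounces = 1; bounces <= 4; ++bounces) {
        Vec3 c = trace(eye, dir, 0, bounces);
        std::printf("maxBounces=%d -> (%.3f, %.3f, %.3f)\n", bounces, c.x, c.y, c.z);
    }
    return 0;
}
```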
 
This happened in Control with Nvidia. There is a mirror room (mirrors on both sides), but only one bounce for reflections, which looks really fake.
 
Finally figured out why the demo looked so off and unimpressive, besides the rather cruddy frame rate. While the demo shows different types of reflections (curved surfaces, flat, uneven, rough, etc.), none really looked that great compared to typical ray-traced images. The BIGGEST thing I now notice is that there is only one level of reflections. Objects that are reflecting (virtually everything), when seen on a reflective surface, have no reflections of their own; a mirror reflected in another mirror, in other words, would be blank instead of showing the mirrors reflecting each other. Obviously you would need a virtually infinite number of rays to calculate reflections of reflections all the way down (typical offline ray tracing gets this to a good level), but at least two levels of reflections would probably have made this demo look better.
In the PS3/X360 generation we had games with 'next gen' visuals where everything was made out of bump mapping.
In the PS4/XO generation we have games with 'next gen' visuals where everything is made out of SSAO.
In the PS5/XSX generation we will have 'next gen' visuals where everything will be made out of mirrors...

...and in a sense RT will truly help drive development costs down, e.g. no need to spend money on artists making textures :ROFLMAO:

(my impression from this RDNA2 demo)
 
I think the next-gen consoles are what's going to help save RTG at AMD. RTX would stick if developers didn't have an alternative, but since they will, and they will develop for the RDNA tech, whatever ray tracing the next-gen games have should run pretty well on AMD.
The only question is whether Nvidia decides to throw some marketing $ at getting games optimized for the tensor cores...
 
I think the next-gen consoles are what's going to help save RTG at AMD. RTX would stick if developers didn't have an alternative, but since they will, and they will develop for the RDNA tech, whatever ray tracing the next-gen games have should run pretty well on AMD.
The only question is whether Nvidia decides to throw some marketing $ at getting games optimized for the tensor cores...

That would be a great waste of marketing dollars since RTX raytracing has nothing to do with tensor cores.
 
Right, well, Nvidia probably spends more on marketing than AMD spends on the whole R&D of their cards. And "it just works".
 
I think the next-gen consoles are what's going to help save RTG at AMD. RTX would stick if developers didn't have an alternative, but since they will, and they will develop for the RDNA tech, whatever ray tracing the next-gen games have should run pretty well on AMD.
The only question is whether Nvidia decides to throw some marketing $ at getting games optimized for the tensor cores...
What do you mean "develop for RDNA tech"?
What are the killer performance-improving features of RDNA2 that Turing does not already have? I highly doubt DX12 "Ultimate" would be released just before cards come out with even more features that need extensions...

Besides, the PS4 and XO had GCN GPUs with such a killer feature: very efficient asynchronous compute. Nvidia cards did not properly support it for a long time and it still made no real difference, so...

I really hope RDNA2's ray tracing performance does not suck and is actually much better than Turing's. If not, it will just hinder the adoption of ray tracing in games and it will be used to a lesser degree. Maybe it will be more extremely optimized, but as we already saw with the "dynamically loaded textures" bullshit from the PS3/X360 era, those extreme optimization techniques do not always translate well into the PC world and definitely do not contribute to the overall quality of games.
 
What do you mean "develop for RDNA tech"?
What are the killer performance-improving features of RDNA2 that Turing does not already have? I highly doubt DX12 "Ultimate" would be released just before cards come out with even more features that need extensions...

Besides, the PS4 and XO had GCN GPUs with such a killer feature: very efficient asynchronous compute. Nvidia cards did not properly support it for a long time and it still made no real difference, so...

I really hope RDNA2's ray tracing performance does not suck and is actually much better than Turing's. If not, it will just hinder the adoption of ray tracing in games and it will be used to a lesser degree. Maybe it will be more extremely optimized, but as we already saw with the "dynamically loaded textures" bullshit from the PS3/X360 era, those extreme optimization techniques do not always translate well into the PC world and definitely do not contribute to the overall quality of games.
Yes, performance much better than Turing, since Turing has pretty much sucked at RT gaming from a performance perspective, with a lack of really innovative games, or games where one would say "I want that!"

The minimum for RT should be something like 1440p at 100+ fps with max or near-max settings and RT on, and 4K at 60+ fps. Preferably 100+ fps at 4K, but most normal games can't even achieve that with a 2080 Ti, except maybe Doom Eternal and older titles.
 
Yes, performance much better than Turing, since Turing has pretty much sucked at RT gaming from a performance perspective, with a lack of really innovative games, or games where one would say "I want that!"

The minimum for RT should be something like 1440p at 100+ fps with max or near-max settings and RT on, and 4K at 60+ fps. Preferably 100+ fps at 4K, but most normal games can't even achieve that with a 2080 Ti, except maybe Doom Eternal and older titles.

And you want it for $199 and include world peace too right?
 