Forza Motorsport 6: The first true DX12 game/true test of DX12?

Zion Halcyon

Just browsing and saw these system requirements for the open beta:

[Image: system requirements table for the Forza Motorsport 6: Apex open beta]



Read more: http://wccftech.com/forza-motorsport-6-apex-windows-10-open-beta-5-system-requirements-revealed/#ixzz478eLkOHT



This appears to be the first game NOT dependent on DX11. VERY curious to see how it performs.
 
I wish more games would give requirements like that.
 
While we're talking about DX12 games that push boundaries; anyone see the trailer for the new Deus Ex?
 
Too bad you have to have Windows 10 to play it. After all these years of people wanting Forza on PC, now MS uses it as a ploy to get more people to install Win 10.
 
Too bad you have to have Windows 10 to play it. After all these years of people wanting Forza on PC, now MS uses it as a ploy to get more people to install Win 10.

DX12 is Win 10 only anyway, so I don't know why you are bitching. It's where the industry is going.
 
DX12 is Win 10 only anyway, so I don't know why you are bitching. It's where the industry is going.
I understand that, but I don't see why it has to be DX12 only when plenty of games can do both. But MS knew what they were doing when they only allowed DX12 on Win 10, same thing as when they only let Vista have DX10 and left XP in the dust. Oh well, it's a gimped version of Forza anyway, and I'd rather not be forced to use the Windows Store BS to play it.
 
I understand that, but I don't see why it has to be DX12 only when plenty of games can do both. But MS knew what they were doing when they only allowed DX12 on Win 10, same thing as when they only let Vista have DX10 and left XP in the dust. Oh well, it's a gimped version of Forza anyway, and I'd rather not be forced to use the Windows Store BS to play it.

Experts have been saying that we won't truly see the benefits out of DX12 and all that it is capable of until companies finally start abandoning DX11. And for the sake of progress, I fully support it.
 
Experts have been saying that we won't truly see the benefits out of DX12 and all that it is capable of until companies finally start abandoning DX11. And for the sake of progress, I fully support it.

That's my understanding as well. They need to focus on DX12 to reap the benefits of it. Games that have implemented both (RotTR, for example) haven't necessarily seen huge gains when using DX12, and in some cases it's been a net loss. This is starting to remind me a little of when DX10 came out.
 
That's my understanding as well. They need to focus on DX12 to reap the benefits of it. Games that have implemented both (RotTR, for example) haven't necessarily seen huge gains when using DX12, and in some cases it's been a net loss. This is starting to remind me a little of when DX10 came out.

Yup, and people who did not take advantage of the free upgrade to Windows 10 because they were being pissy are probably the most upset about it. However it's their own damn fault.
 
Since when is RRotR considered a valid data point? It was my understanding that it was dismissed because amd cards didn't do too well with the transition to DX12.

Anyway, Zion, what makes you think this is the first 'true' dx12 game?

Quantum Break is DX12 only as well
Your thread titles make me think you're a journalist of some kind :p
 
Since when is RRotR considered a valid data point? It was my understanding that it was dismissed because amd cards didn't do too well with the transition to DX12.

Any game that uses both DX11 and 12 is a valid data point. Any game that exclusively uses DX12 is a valid data point. I don't selectively choose data points to fit my theory. I adjust my theory based on the objective data.

As of now, there are VERY few games in either category so there can't be a definitive conclusion. The few data points that we have don't tell us much. But there was a developer comment in the front page news sometime a week or two ago indicating that they really aren't going to get the most out of DX12 until they drop DX 11. Exclusively working with one API will allow for better use of that API. That's a strong data point to consider, even if it really is just a result of allocated resources and not "which API offers more performance." But, that's reality. They can't throw infinite resources at a project. Supporting two APIs with a fixed schedule and fixed manpower means less optimization than supporting just one API.
 
Any game that uses both DX11 and 12 is a valid data point. Any game that exclusively uses DX12 is a valid data point. I don't selectively choose data points to fit my theory. I adjust my theory based on the objective data.

As of now, there are VERY few games in either category so there can't be a definitive conclusion. The few data points that we have don't tell us much. But there was a developer comment in the front page news sometime a week or two ago indicating that they really aren't going to get the most out of DX12 until they drop DX 11. Exclusively working with one API will allow for better use of that API. That's a strong data point to consider, even if it really is just a result of allocated resources and not "which API offers more performance." But, that's reality. They can't throw infinite resources at a project. Supporting two APIs with a fixed schedule and fixed manpower means less optimization than supporting just one API.

Yeah this is very true, I wasn't arguing that RotTR (woops wrote RRotR earlier) shouldn't be considered, playing devil's advocate really.

I think it's not even about people abandoning DX11, it's just that DX12 needs more time/effort dedicated to it to run on par with DX11.

Let's say it takes 2,000 man-hours to get project X, running on engine Y, to 1080p60.

It will take >2,000 man-hours just to get DX12 performance to match that of DX11. Obviously, if you were CPU-limited in the latter this won't apply; I'm talking strictly about GPU performance.
 
Yeah this is very true, I wasn't arguing that RotTR (woops wrote RRotR earlier) shouldn't be considered, playing devil's advocate really.

No harm in that. If everyone was always in agreement there would be no need for discussion forums.

I think it's not even about people abandoning DX11, it's just that DX12 needs more time/effort dedicated to it to run on par with DX11.

Let's say it takes 2,000 man-hours to get project X, running on engine Y, to 1080p60.

It will take >2,000 man-hours just to get DX12 performance to match that of DX11. Obviously, if you were CPU-limited in the latter this won't apply; I'm talking strictly about GPU performance.

I believe that is speculative at this point, though a good educated guess. More features = more time to implement. If your theory holds true (I agree with it, it's just not factually proven yet), developers would have to choose between less performance/features for broader compatibility and lower development cost (sounds like consoles), vs. higher performance/features with everything else being the opposite. Since I'm a PC gamer and not a console gamer, I hope for the latter.
 
It's unlikely all devs will go with DX12 simply because right now there's not much of a need for it. You can either spend time adding DX12 or adding new features/optimize DX11, which has a much broader userbase.

There's really not much incentive for devs to go with DX12 just yet. Most DX12 games so far use it as a marketing tool, not much else.

Why shovel more crap for not much in return? You get much better time ROI if you stick with DX11 and use the extra time from expertise to optimise/add features.
 
All valid points. Developers must consider the dev cycle. Games releasing this year are more likely to have DX12 bolted on as an afterthought (Hitman, RotTR) because DX12 wasn't even a spec when their development began. Games beginning their development phase this year, unless they're an annual crank-it-out release, will benefit from native DX12 from the start due to the likely higher uptake by the time the game sees release in 2-4 years.

So I'm skeptical of early DX12 releases. It couldn't have been a consideration early in the development phase unless the game itself isn't very well optimized (i.e., it just started development within the last year).
 
I believe that is speculative at this point, though a good educated guess.

[Image: slide from the GDC presentations listing the added responsibilities DX12 places on the application]


It's not really speculative. Developers and IHVs have made it very clear that DX12 requires a lot more effort and expertise from developers. All of the threading, resource management and memory management done by DX11 drivers now has to be handled by a DX12 application.

Here are a few good GDC 2016 presentations from both AMD and nVidia on the topic.

http://gpuopen.com/wp-content/uploa...ogramming_Model_and_Hardware_Capabilities.pdf

http://developer.download.nvidia.co...dvancedRenderingwithDirectX11andDirectX12.pdf
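
Not to belabor the point, but for anyone wondering what "the application now handles what the DX11 driver used to" actually looks like, here's a rough C++/D3D12 sketch of the explicit CPU/GPU fence synchronization a DX12 renderer has to write itself. The g_* names are just placeholders, not anything from the presentations above.

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Created elsewhere during device setup (placeholders for this sketch).
ComPtr<ID3D12CommandQueue> g_queue;
ComPtr<ID3D12Fence>        g_fence;       // from ID3D12Device::CreateFence
UINT64                     g_fenceValue = 0;
HANDLE                     g_fenceEvent = nullptr; // from CreateEvent

void WaitForGpu()
{
    // Ask the GPU to signal the fence once all previously submitted work is done.
    const UINT64 valueToWait = ++g_fenceValue;
    g_queue->Signal(g_fence.Get(), valueToWait);

    // If the GPU hasn't reached that point yet, block the CPU until it has.
    if (g_fence->GetCompletedValue() < valueToWait)
    {
        g_fence->SetEventOnCompletion(valueToWait, g_fenceEvent);
        WaitForSingleObject(g_fenceEvent, INFINITE);
    }
}

Under DX11 the driver did this kind of bookkeeping for you; in DX12, forgetting it means the CPU tramples resources the GPU is still reading.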
 
All valid points. Developers must consider the dev cycle. Games releasing this year are more likely to have DX12 bolted on as an afterthought (Hitman, RotTR) because DX12 wasn't even a spec when their development began. Games beginning their development phase this year, unless they're an annual crank-it-out release, will benefit from native DX12 from the start due to the likely higher uptake by the time the game sees release in 2-4 years.

So I'm skeptical of early DX12 releases. It couldn't have been a consideration early in the development phase unless the game itself isn't very well optimized (i.e., it just started development within the last year).

It's just like the tacked-on DX11 features when DX11 was released.

Not just games, but engines beginning their development cycle too. Also, console refreshes (2nd gen will coexist with 1st gen) will slow things down further.
 
It's not really speculative. Developers and IHVs have made it very clear that DX12 requires a lot more effort and expertise from developers. All of the threading, resource management and memory management done by DX11 drivers now has to be handled by a DX12 application.

Here are a few good GDC 2016 presentations from both AMD and nVidia on the topic.

http://gpuopen.com/wp-content/uploa...ogramming_Model_and_Hardware_Capabilities.pdf

http://developer.download.nvidia.co...dvancedRenderingwithDirectX11andDirectX12.pdf

The way that DX11 and DX12 work as APIs is that the developer can target specific feature levels. Of the things posted, the developer does not HAVE to use them. Additionally, some of them can be more efficient replacements for the old methods. Now, developers can use the old method of supporting multiple GPUs (SLI/Crossfire), continue to ignore it (as many developers have), or use the API to hit it at a lower level.

Nvidia lists that slide as "more responsibilities." It should actually be titled, "more options."
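
For reference, "targeting a feature level" in DX12 looks roughly like the sketch below; a hedged C++ example, assuming you already have a DXGI adapter, that creates the device at feature level 11_0 and queries an optional capability (resource binding tier) instead of assuming it.

Code:
#include <windows.h>
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Device> CreateDevice(IDXGIAdapter* adapter)
{
    ComPtr<ID3D12Device> device;

    // Target feature level 11_0: the DX12 API, but only capabilities that
    // DX11-class hardware already exposes.
    if (FAILED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return nullptr;

    // Optional features are queried rather than assumed, e.g. the resource
    // binding tier (the Tier 2 vs Tier 3 distinction comes up later in the thread).
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        if (options.ResourceBindingTier < D3D12_RESOURCE_BINDING_TIER_3)
        {
            // Fall back to a less descriptor-heavy binding scheme here.
        }
    }
    return device;
}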
 
Since when is RRotR considered a valid data point? It was my understanding that it was dismissed because amd cards didn't do too well with the transition to DX12.

Anyway, Zion, what makes you think this is the first 'true' dx12 game?

Quantum Break is DX12 only as well
Your thread titles make me think you're a journalist of some kind :p

Frankly, I missed that.
 
If we had DX12 on older platforms, it would only add more variables for devs to grapple with. They already have plenty of difficulties, so I accept the reality that sometimes you need the new OS.
 
The way that DX11 and DX12 work as APIs is that the developer can target specific feature levels. Of the things posted, the developer does not HAVE to use them. Additionally, some of them can be more efficient replacements for the old methods. Now, developers can use the old method of supporting multiple GPUs (SLI/Crossfire), continue to ignore it (as many developers have), or use the API to hit it at a lower level.

Nvidia lists that slide as "more responsibilities." It should actually be titled, "more options."

Resource management, scheduling, synchronization and memory management are not "options". They are fundamental requirements for writing software. Notice the slide says "needs to", not "can choose to".

No offense to you specifically but it would be nice if people actually took the time to understand how games are made before commenting on these things.
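
To put the "memory management" bullet in concrete terms, here's a rough C++/D3D12 sketch of the explicit allocation an application now does itself: heap type, initial resource state and layout are all spelled out by the app instead of the driver. The helper name is made up for the example.

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Creates a CPU-writable upload buffer; under DX11, Map/UpdateSubresource
// hid these decisions inside the driver.
ComPtr<ID3D12Resource> CreateUploadBuffer(ID3D12Device* device, UINT64 sizeInBytes)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_UPLOAD;          // app picks the heap type

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeInBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;      // buffers are typeless
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR; // required for buffers

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ, // upload heaps start here
                                    nullptr, IID_PPV_ARGS(&buffer));
    return buffer;
}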
 
Resource management, scheduling, synchronization and memory management are not "options". They are fundamental requirements for writing software. Notice the slide says "needs to", not "can choose to".

No offense to you specifically but it would be nice if people actually took the time to understand how games are made before commenting on these things.

You're glossing over the feature level part. Developers can use the lower feature level to allow DirectX to manage those things as they have in the past. Assuming that I didn't know that doesn't change it.
 
[Image: slide from the GDC presentations listing the added responsibilities DX12 places on the application]


It's not really speculative. Developers and IHVs have made it very clear that DX12 requires a lot more effort and expertise from developers. All of the threading, resource management and memory management done by DX11 drivers now has to be handled by a DX12 application.

Here are a few good GDC 2016 presentations from both AMD and nVidia on the topic.

http://gpuopen.com/wp-content/uploa...ogramming_Model_and_Hardware_Capabilities.pdf

http://developer.download.nvidia.co...dvancedRenderingwithDirectX11andDirectX12.pdf

Can someone tell me how the FUCK we have these slides pasted all over the forum but somehow Oxide didn't get the memo.

vendOr (not vender! silly nvidia powerpoint monkeys) specific code paths
minimizing resource barriers (hello async on nvidia) {context: it appears the compute queue is just offloaded to the graphics queue at the driver level, so enabling async shaders ends up with everything on the graphics queue plus resource barriers that are now unnecessary but not removed, causing performance to go to shit}
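
For anyone following along, this is roughly what one of those resource barriers looks like in C++/D3D12. It's a sketch only; the states and function name are just an example, but it shows why redundant or badly placed barriers hurt: each one can force the GPU to drain work before continuing.

Code:
#include <d3d12.h>

// Tell the GPU a texture is switching from render target to shader input.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

    // One call can batch many barriers; issuing them one at a time, or issuing
    // barriers that are no longer needed, is the waste the slides warn about.
    cmdList->ResourceBarrier(1, &barrier);
}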
 
You're glossing over the feature level part. Developers can use the lower feature level to allow DirectX to manage those things as they have in the past. Assuming that I didn't know that doesn't change it.

What I'm saying is that there is no DX12 feature level that gives you those things for free. For that you have to use the DX11 API. There's a lot of good info on this stuff from the GDC presentations.
 
vendor specific code paths

Yeah if people think that games favoring one architecture over another will go away with DX12 they have a big surprise coming. It's only going to get worse because now it's up to devs to optimize for each IHV without as much help from the driver.
 
Vendor-specific code paths were pulled into the DX12 application itself, so the code path can't be black-boxed against a competing IHV. Yes, it may not be optimal if the IHV hasn't worked with the game engine developer, but at least it can't (or shouldn't) be tampered with.

I'll let you guess who initiated this feature.
 
Vendor-specific code paths were pulled into the DX12 application itself, so the code path can't be black-boxed against a competing IHV. Yes, it may not be optimal if the IHV hasn't worked with the game engine developer, but at least it can't (or shouldn't) be tampered with.

I'll let you guess who initiated this feature.

No, you're twisting the truth.

What happened was Oxide made a statement in which they claimed that both AMD and NV had access to their source code and could make changes as they saw fit; Oxide would then have final say on what got merged into their internal builds.

There's absolutely no indication this was ever done.

Case in point: Kollock repeatedly stated there is no IHV-specific code. Near the end he conceded that they did add IHV-specific code: the part that automatically disables async shaders for nvidia hardware.

Which ties into my point about DX12 being mainly a marketing ploy for AotS: that 'no IHV-specific code' line sounds good to the average consumer, but it's the opposite of what you should do in DX12. If you don't want that, then develop for DX11, which runs fine unless you're CPU-bottlenecked (aka AMD).
 
No, you're twisting the truth.

What happened was Oxide made a statement in which they claimed that both AMD and NV had access to their source code and could make changes as they saw fit; Oxide would then have final say on what got merged into their internal builds.

There's absolutely no indication this was ever done.

Case in point: Kollock repeatedly stated there is no IHV-specific code. Near the end he conceded that they did add IHV-specific code: the part that automatically disables async shaders for nvidia hardware.

Which ties into my point about DX12 being mainly a marketing ploy for AotS: that 'no IHV-specific code' line sounds good to the average consumer, but it's the opposite of what you should do in DX12. If you don't want that, then develop for DX11, which runs fine unless you're CPU-bottlenecked (aka AMD).

Oh who's twisting words? ;)

Oxide worked, and still is working, with both IHVs. Here are a couple of links you can read up on: https://hardforum.com/threads/nvidi...k-developer-over-an-alpha-level-game.1872607/

Oxide Developer: "NVIDIA Was Putting Pressure On Us To Disable Certain Settings In The Benchmark"
“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.”

Nvidia publicly bashing Stardock developer over an ALPHA level game

Oxide had to implement nvidia-specific code because nvidia drivers had hooks for async compute indicating it could be used, but if you tried to use it, all hell basically broke loose.
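
For what it's worth, the vendor-ID check Kollock describes would look something like this; a rough C++/DXGI sketch, not Oxide's actual code, with the function name made up for the example.

Code:
#include <dxgi.h>

// Decide whether to schedule compute work on a separate queue, based on the
// adapter's PCI vendor ID (0x10DE = NVIDIA, 0x1002 = AMD).
bool UseAsyncCompute(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    if (FAILED(adapter->GetDesc(&desc)))
        return false;

    // The driver may report the capability even where it performs poorly in
    // practice, so the renderer falls back to a single graphics queue there.
    const bool isNvidia = (desc.VendorId == 0x10DE);
    return !isNvidia;
}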
 
Oh who's twisting words? ;)

Oxide worked, and still is working, with both IHVs. Here are a couple of links you can read up on:

Oxide Developer: "NVIDIA Was Putting Pressure On Us To Disable Certain Settings In The Benchmark"


Nvidia publicly bashing Stardock developer over an ALPHA level game

Oxide had to implement nvidia-specific code because nvidia drivers had hooks for async compute indicating it could be used, but if you tried to use it, all hell basically broke loose.


This just confirms what I said in the previous post.

You're twisting words; allowing IHVs to make changes isn't the same as integrating those changes into your project.

On top of this, you giving me a statement in which Oxide says 'we're not biased' would hardly change my opinion of them if I thought they were biased.

You present 'nvidia pressured Oxide to turn off async shaders' as something damning, but it makes sense... It's supposed to improve performance, and it doesn't. Turn it off.
 
So Oxide is claiming that, with the exception of async, the rest of their code is "vendor neutral" and has absolutely no IHV-specific optimizations?


Which makes no sense at all, because the DX11 version of their game doesn't line up with any other DX11 game performance-wise (CPU overhead and other things), outside of Hitman, which is also an AMD-sponsored title. So, as you correctly assessed before, DX12 will not ease developers' pain when it comes to different IHVs, and with certain features it will make it worse. In any case, this is the norm for any low-level programming when it comes to vendor-specific optimizations.
 