[RUMOR] Pascal in trouble with Asynchronous Compute

trick0502

http://www.bitsandchips.it/52-engli...scal-in-trouble-with-asyncronous-compute-code

"Broadly speaking, Pascal will be an improved version of Maxwell, especially about FP64 performances, but not about Asyncronous Compute performances. NVIDIA will bet on raw power, instead of Asynchronous Compute abilities. This means that Pascal cards will be highly dependent on driver optimizations and games developers kindness."

enjoy!!
 
Bits And Chips is the Italian version of WCCFTech. I would take anything they say with a dump truck full of salt. Besides, there are a lot of weasel words used in that post. Unnamed sources also usually means "some random post we found on the internet."
 
Well, going by their current cards' total lack of async performance, this rumor could very well be true. It's not as if they had it from the beginning and the rumor is that it's being removed. With that said, I didn't see much substance on that webpage worth mentioning. This is still well worth tracking in the run-up to the Pascal launch, though. Enthusiasts need to know before they spend their money.
 
Except Pascal is a brand new architecture. We can't make assumptions based on the current generation of hardware, which is what this clickbait is doing.
 
This isn't even a new rumor. Back when the first Ashes of the Singularity benchmark came out, and people were shocked by AMD's performance in it, and we first learned about async, there was already a rumor that same month that the Pascal cards would not have async because they were already too far into the development process, and that async wouldn't arrive until the generation after.

I don't get why people want to be so dismissive. This is nothing but good news - it means that no one brand will kick butt in everything, bringing the old card wars back, which should in turn benefit us in terms of prices.

Sorry Armenius, but I think your post is just wishful thinking.
 
A rumor is a rumor, but if it does turn out to be true that Pascal has no async scheduler, how would NVIDIA plan to use the multi-threading capabilities of DX12 (i.e., beyond 2 CPU cores)? It would be a nightmare to code and pre-schedule 8 CPU cores/threads onto one rigid GPU queue without bottlenecking anything, and NVIDIA must have known this at the design stage of Pascal. Odd...
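For anyone who hasn't poked at the API side of this, here is roughly what "multiple queues" looks like in plain D3D12 terms. This is only a minimal sketch of the standard API calls (nothing vendor-specific, error handling omitted); whether the GPU actually runs the compute queue concurrently with the graphics queue instead of serializing them is up to the hardware and driver, which is the whole async debate:

Code:
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter (error handling omitted for brevity).
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The "normal" graphics queue: draws, copies and compute can all funnel through here.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A separate compute queue: the API lets you submit compute work here
    // independently of the graphics queue. "Async compute" is the GPU actually
    // executing both queues at the same time instead of one after the other.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> compQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

    // Cross-queue synchronization is done with fences, e.g. make the graphics
    // queue wait until the compute queue has reached a given fence value.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    compQueue->Signal(fence.Get(), 1);  // compute marks its work as done
    gfxQueue->Wait(fence.Get(), 1);     // graphics waits for that point
    return 0;
}

How much you actually gain from that second queue depends entirely on whether the GPU's scheduler can fill idle units with the compute work, which is exactly what the Maxwell vs. GCN argument is about.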
 
This isn't even a new rumor. Back when the first Ashes of the Singularity benchmark came out, and people were shocked by AMD's performance in it, and we first learned about async, there was already a rumor that same month that the Pascal cards would not have async because they were already too far into the development process, and that async wouldn't arrive until the generation after.

I don't get why people want to be so dismissive. This is nothing but good news - it means that no one brand will kick butt in everything, bringing the old card wars back, which should in turn benefit us in terms of prices.

Sorry Armenius, but I think your post is just wishful thinking.
I'm not wishing for anything, I'm simply dismissing what the link in the OP is saying. The only use rumor and speculation serves is to drive clicks. Which I didn't give them or WCCFTech because I archive these kinds of sites.

[RUMOR] Pascal in trouble with Asynchronous Compute code - Bits and C…
Rumor: Nvidia's Pascal Architecture Is In Trouble With Asynchronous C…
 
A rumor is a rumor, but if it does turn out to be true that Pascal has no async scheduler, how would NVIDIA plan to use the multi-threading capabilities of DX12 (i.e., beyond 2 CPU cores)? It would be a nightmare to code and pre-schedule 8 CPU cores/threads onto one rigid GPU queue without bottlenecking anything, and NVIDIA must have known this at the design stage of Pascal. Odd...


Async compute seemed to be something that took NVIDIA aback, to be honest. It's something AMD has been baking into their cards since forever, but it just never got used. So AMD slickly started with Mantle, which then morphed into Vulkan and got partially baked into DX12, and now, by going all-in on open source, they have finally found a way to get developers to use otherwise idle parts of their hardware for a better experience. Devs are eager for the performance gains of DX12, and up until that point they hadn't really wanted to learn a whole new, lower-level API.

In football terms, NVIDIA was content to play a prevent defense, and AMD just scored on a Hail Mary and tied the game. Either team could win, but right now it's all even, and AMD has the momentum.

Anyway, we'd already been hearing about Pascal before async became a thing, so it stands to reason it was already past the R&D stage, and NVIDIA will be trying to compensate with sheer brute force until they can get the generation after Pascal out to market. The key thing here is whether AMD stands pat or finds a way to keep a competitive advantage through design; now is not the time to be fat and happy after one design win you may have lucked into. Now is the time to press the advantage, so Polaris had better be good, and Vega had better be something even better.
 
I'm not wishing for anything, I'm simply dismissing what the link in the OP is saying. The only use rumor and speculation serves is to drive clicks. Which I didn't give them or WCCFTech because I archive these kinds of sites.

[RUMOR] Pascal in trouble with Asynchronous Compute code - Bits and C…
Rumor: Nvidia's Pascal Architecture Is In Trouble With Asynchronous C…


Sorry, but you just sound more dismissive than logical.

It always strikes me as funny when people bitch about "unnamed sources": while I understand some skepticism, an out-of-hand dismissal seems foolish, since unnamed sources are used at just about every level of credible journalism as well.
 
Bits And Chips is the Italian version of WCCFTech. I would take anything they say with a dump truck full of salt. Besides, there are a lot of weasel words used in that post. Unnamed sources also usually means "some random post we found on the internet."

True...


OTOH people have been talking about Pascal coming for months and telling folks to hold off buying new GPUs because it is "right around the corner"....And it is just about April now and there's a whole lotta nothing to show for all that.
 
Async compute seemed to be something that took NVIDIA aback, to be honest. It's something AMD has been baking into their cards since forever, but it just never got used. So AMD slickly started with Mantle, which then morphed into Vulkan and got partially baked into DX12, and now, by going all-in on open source, they have finally found a way to get developers to use otherwise idle parts of their hardware for a better experience. Devs are eager for the performance gains of DX12, and up until that point they hadn't really wanted to learn a whole new, lower-level API.

In football terms, NVIDIA was content to play a prevent defense, and AMD just scored on a Hail Mary and tied the game. Either team could win, but right now it's all even, and AMD has the momentum.

Anyway, we'd already been hearing about Pascal before async became a thing, so it stands to reason it was already past the R&D stage, and NVIDIA will be trying to compensate with sheer brute force until they can get the generation after Pascal out to market. The key thing here is whether AMD stands pat or finds a way to keep a competitive advantage through design; now is not the time to be fat and happy after one design win you may have lucked into. Now is the time to press the advantage, so Polaris had better be good, and Vega had better be something even better.

I believe, if you read back, that the use of async actually started with Sony and the PS4. The PS4 already has better hardware than the XB1, and using async on its ACE-unit advantage to get an additional 15-30% of performance was a VERY big deal against the XB1. Microsoft had no choice but to follow suit, albeit with fewer ACE units. Sony clearly had a plan going in with the PS4. Then, speculating, AMD forced Microsoft's hand with Mantle to bring DX12 to the PC earlier than planned, by extension of the XB1. So I'd say both MS and NVIDIA got surprised.

As for Pascal brute-forcing its way to keep up with Polaris on async... I'll say a dubious "maybe". Polaris would have to be at roughly a 20% disadvantage just to break even with Pascal in a level DX12/Vulkan dev environment.

Then there's multi-threaded coding at the CPU level, which is still a major hurdle without an async scheduler on the GPU. So possibilities like freeing up more CPU resources for AI can't happen easily.
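To make that concrete: DX12 happily lets you record command lists on as many CPU threads as you like, but everything still gets handed to the GPU through ExecuteCommandLists on a queue. Below is a rough sketch of that pattern (hypothetical structure, no real draw/dispatch recording, fencing and error handling omitted). If the GPU front end can only chew through one rigid queue, all that CPU-side parallelism only helps until the queue itself becomes the bottleneck:

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

struct ThreadWork {
    ComPtr<ID3D12CommandAllocator> alloc;      // must stay alive until the GPU is done with it
    ComPtr<ID3D12GraphicsCommandList> list;
};

// Each worker thread records into its own allocator/list pair
// (allocators and lists must not be shared between threads while recording).
void RecordOnThread(ID3D12Device* device, ThreadWork& out)
{
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&out.alloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, out.alloc.Get(),
                              nullptr, IID_PPV_ARGS(&out.list));
    // ... record this thread's draws / dispatches here ...
    out.list->Close();
}

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));

    // Eight CPU threads record in parallel...
    const int kThreads = 8;
    std::vector<ThreadWork> work(kThreads);
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back([&, i] { RecordOnThread(device.Get(), work[i]); });
    for (auto& w : workers) w.join();

    // ...but submission funnels back into a single queue. How much of this
    // work actually overlaps on the GPU is up to the GPU-side scheduler.
    std::vector<ID3D12CommandList*> lists;
    for (auto& w : work) lists.push_back(w.list.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
    return 0;
}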
 
So they omit the slide from that deck which stated that fine-grained preemption is coming in a later architecture (the slide deck with the moon-landing guy), and assume there haven't been any changes to the scheduler in an unreleased product? Typical poor journalism.

Well, if we believe this rumor, Yakk, should the leak of Polaris 10 having the shader throughput of a GTX 770 also be believed?

AMD Polaris 10 specs and possible benchmark

That would put it 20% (actually more like 40%) below a GTX 1080, or whatever they end up calling it.

Which, actually, with the 2.5x power savings over last generation and what we have seen of Polaris 10 (AMD's Star Wars Battlefront demo and its power usage), would put it around there...

First off, I don't want to turn this into a Polaris vs. Pascal thread, but I don't think Polaris is going up against high-end and enthusiast-level Pascal (GP104 and GP100). It is aimed at the notebook and mid-range to upper-mid-range markets, where its direct competition will be GP108 derivatives, so AMD might take back quite a bit of lost marketshare. So if we want to talk about the AMD product that will go against the Pascal parts rumored to launch in the next month or so, it will be Vega, not Polaris.
 
So they omit the slide from that deck which stated that fine-grained preemption is coming in a later architecture (the slide deck with the moon-landing guy), and assume there haven't been any changes to the scheduler in an unreleased product? Typical poor journalism.

Well, if we believe this rumor, Yakk, should the leak of Polaris 10 having the shader throughput of a GTX 770 also be believed? That would put it 20% below a GTX 1080, or whatever they end up calling it.


My big issue is that, both in some of the comments here and in those stories, some of the stuff I am reading has more of a tone of "they aren't telling me that what I want to be true, is true, and because they aren't, it's poor journalism."

People can try to dress it up however they want to not make it sound that way, but I have seen enough of these brand wars, and even been a part of them in my youth, to smell it out when people are too attached to a brand they want to defend, for whatever silly reason.
 
Well, I see this launch being very different from previous generations. AMD seems to be taking a different road and might throw a wrench into nV's plans. If AMD is able to gain a significant amount of marketshare back from nV, which is what it looks like they are trying to do (judging by Raja's interviews, they sure look like they want the notebook and low-power segments back), there is going to be a big shift in OEM sales. AMD's current cards aren't selling well; we have seen this in the leftover inventory mentioned in every single conference call, and this would change that quickly. They might not get improved margins per unit, but they will get volume sales.

Back to the original article: when has nV ever gone the "raw power" route? There were times they stretched out architectures too long, as did AMD, but neither of these companies plans for a brute-force approach in their designs, lol. It's comical that anyone would tie GameWorks being open-sourced to problems with architectural design, lol.
 
Actually, I read an interview with NVIDIA, on PCPer I think. The overtone was very much what this article is assuming about Pascal.
 
Hmm, it wasn't PCPer, and nV has never talked in depth about the Pascal architecture or its shader pipeline in conjunction with scheduling. Do you remember Mahigan and I asking each other if we had any papers on those? They didn't even do that for Maxwell until just a month or two ago, so where would Pascal fit in? Everything people have been talking about is speculation and rumor, not based on anything nV has stated thus far, only on Maxwell.

Ironically, the DX12 API was being created at the end of 2013, and as we know it is heavily based on DX11.x from the Xbox. Both nV and AMD knew this, so whoever stated that it was too late to make changes to the Pascal architecture for DX12's needs is sorely mistaken; there would have been more than enough time...
 
It wasn't about Pascal but about async, at GDC. It doesn't take much to see that a lack of async in Pascal is very likely.
 
Yeah, and at GDC the only issue raised with Maxwell's architecture was that its scheduler is not as robust as AMD's, so programmers have to be aware of that and be cautious about doing certain things at certain times; in other words, there are coding paths for nV that would work just fine. No such thing as a lack of async. No such thing as no hardware support for async. All of that stuff is BS that AMD started because Oxide made an assumption.

And as for the slide that stated fine-grained preemption is coming in a later architecture: changes have to be made to the scheduler to accommodate that, and it is not a simple change. As a somewhat out-there example, what is the difference, from a memory and cache standpoint, in going from a wavefront size of 64 to 32? Do you remember when nV moved to smaller warp sizes in their architecture, and how that impacted branching and prediction performance because of the cache and register requirements? Small changes, but huge ramifications for other parts of the system, and fine-grained preemption has the same kind of implications for the scheduler. What fine-grained preemption does is give the programmer control to perform context switches quickly, with less latency and overhead. If you read about it in the GCN architecture whitepapers, you will see what I mean: how this impacts the scheduler and what the scheduler needs in order to do it.

So there was no such thing as what you stated, or you are just not remembering things right, which can happen; there have been too many discussions about it here and on every other site out there.
 
Sorry, but you just sound more dismissive than logical.

It always strikes me as funny when people bitch about "unnamed sources": while I understand some skepticism, an out-of-hand dismissal seems foolish, since unnamed sources are used at just about every level of credible journalism as well.
I said I was being dismissive right in the comment you quoted... :rolleyes:

Credible sources, when cited, are given context as it relates to what is being leaked. We were given no context in the post whatsoever. All it says is "Our sources..." Sources from where? Industry insiders? NVIDIA employees? Game developers?
 
I said I was being dismissive right in the comment you quoted... :rolleyes:

Credible sources, when cited, are given context as it relates to what is being leaked. We were given no context in the post whatsoever. All it says is "Our sources..." Sources from where? Industry insiders? NVIDIA employees? Game developers?


Well, I guess if you admit your own ignorance, then it's OK to be ignorant? :/
 
Yeah, and at GDC the only issue raised with Maxwell's architecture was that its scheduler is not as robust as AMD's, so programmers have to be aware of that and be cautious about doing certain things at certain times; in other words, there are coding paths for nV that would work just fine. No such thing as a lack of async. No such thing as no hardware support for async. All of that stuff is BS that AMD started because Oxide made an assumption.

And as for the slide that stated fine-grained preemption is coming in a later architecture: changes have to be made to the scheduler to accommodate that, and it is not a simple change. As a somewhat out-there example, what is the difference, from a memory and cache standpoint, in going from a wavefront size of 64 to 32? Do you remember when nV moved to smaller warp sizes in their architecture, and how that impacted branching and prediction performance because of the cache and register requirements? Small changes, but huge ramifications for other parts of the system, and fine-grained preemption has the same kind of implications for the scheduler. What fine-grained preemption does is give the programmer control to perform context switches quickly, with less latency and overhead. If you read about it in the GCN architecture whitepapers, you will see what I mean: how this impacts the scheduler and what the scheduler needs in order to do it.

So there was no such thing as what you stated, or you are just not remembering things right, which can happen; there have been too many discussions about it here and on every other site out there.
No, NVIDIA was speaking about async and how they didn't feel concerned for the near future. If in fact the async debacle were about to be alleviated, I am not so sure NVIDIA would have been so adamant in their stance in that interview.

Not fact, just educated conjecture based on their response. And all of this was after the preemption slides.
 
Err, do you know when those slides were first shown? It wasn't GDC... it was Maxwell 2's launch deck.

This is the problem when you have poor journalists who don't understand what they are saying because they take things out of context. This is the third or fourth, if not the fifth, time we have talked about this, and you still presume Maxwell 2 was incapable of async due to hardware limitations, lol, even after the GDC talks where AMD and nV, on stage at the same time, each stated what is good and bad for their own architecture. What does that say about putting one's head in the sand? Oh yeah, you can try to twist and curve your words to match, but it doesn't make sense when engineers from both sides unequivocally stated what I have been saying from the moment this crap discussion about hardware-supported async started close to 8 months ago.

[Slide image: dxdqFoK.png]

This is the slide.
 
Well, I guess if you admit your own ignorance, then it's OK to be ignorant? :/
I completely read both the Bits and Chips and WCCFTech articles and their sources (as minimal as they are), and I am dismissing what they said because none of the content had any merit. How can this be borne out of ignorance? I don't know how I can explain this any clearer.
 
I completely read both the Bits and Chips and WCCFTech articles and their sources (as minimal as they are), and I am dismissing what they said because none of the content had any merit. How can this be borne out of ignorance? I don't know how I can explain this any clearer.

I guess clarity isn't your strong suit? Considering that I said my main bone of contention was dismissing something out of hand because you disagree with the findings?

If someone wants to take what was reported with a grain of salt - sure, fine. I won't begrudge that. But to say "Unnamed sources also usually means "some random post we found on the internet."" is flat out ignorant and dismissive for all the wrong reasons. Every single professional journalist uses unnamed sources. Every. Single. One.

So, sorry, that just sounded ignorant to me, like you were looking for an easy out to dismiss what was said. Especially since logically, what was reported makes a whole hell of a lot of sense.
 
I guess clarity isn't your strong suit? Considering that I said my main bone of contention was dismissing something out of hand because you disagree with the findings?

If someone wants to take what was reported with a grain of salt - sure, fine. I won't begrudge that. But to say "Unnamed sources also usually means "some random post we found on the internet."" is flat out ignorant and dismissive for all the wrong reasons. Every single professional journalist uses unnamed sources. Every. Single. One.

So, sorry, that just sounded ignorant to me, like you were looking for an easy out to dismiss what was said. Especially since logically, what was reported makes a whole hell of a lot of sense.


Actually, what is highlighted in red wasn't the case 10 years ago. Bloggers do that; journalists don't. And if journalists can't say who their sources are, they will find evidence to substantiate what the source stated. But since the article's headline was labeled as a rumor, it's neither here nor there.
 
Actually, what is highlighted in red wasn't the case 10 years ago. Bloggers do that; journalists don't. And if journalists can't say who their sources are, they will find evidence to substantiate what the source stated. But since the article's headline was labeled as a rumor, it's neither here nor there.


Revisionist history and bullshit. Sometimes, the only way to get information is to protect your source, which is as old as the profession itself, and where "unnamed sources" comes from.

So stop spouting BS you want to be true and actually learn something. "Unnamed source" is as old as journalism itself.
 
Revisionist history and bullshit. Sometimes, the only way to get information is to protect your source, which is as old as the profession itself, and where "unnamed sources" comes from.

So stop spouting BS you want to be true and actually learn something. "Unnamed source" is as old as journalism itself.


No. Journalists are held accountable by libel law; bloggers should be too, but they play fast and loose, and since they can retract things just as quickly as they post them, it is not frowned upon. This is why you don't see articles like this one in the New York Times or syndicated papers online, not to mention that when something is said under the pretense of "rumor", it definitely won't be. This is also why, even if someone writes something libelous and then says the sources exist but can't be revealed, those sources have to be revealed if it is taken to court; if revealing them is dangerous, the judge can order it done in closed chambers, where he alone has access to the source, and judgment will be based on that.

There is no such thing as protecting the source, but in this case, if it is listed as a rumor, there is no pretense, since the writer is already saying it could be BS. The more notable news sites have opinion sections, and that is exactly what they write there: they might base it on something that was stated by someone (their source), but it is presented as opinion, since there is no proof of what the source stated.
 
No. Journalists are held accountable by libel law; bloggers should be too, but they play fast and loose, and since they can retract things just as quickly as they post them, it is not frowned upon enough. This is why you don't see articles like this one in the New York Times or syndicated papers online, not to mention that when something is said under the pretense of "rumor", it definitely won't be. This is also why, even if someone writes something libelous and then says the sources exist but can't be revealed, those sources have to be revealed if it is taken to court; if revealing them is dangerous, the judge can order it done in closed chambers, where he alone has access to the source, and judgment will be based on that.

There is no such thing as protecting the source, but in this case, if it is listed as a rumor, there is no pretense, since the writer is already saying it could be BS.

Bullcrap.

SPJ Ethics Committee Position Papers: Anonymous Sources | Society of Professional Journalists | Improving and protecting journalism since 1909


Outlines conditions and how a respected journalist should handle unnamed sources. There IS such a thing.

Look. I am not going to be drawn into a strawman and forced into defending the credibility of those sites at large, as that never was my argument in the first place. My contention was the very flimsy grounds on which those sites were dismissed, grounds that seem rooted not in any checkered history on their part but in willful ignorance and a desire to conform to the mental status quo on the part of those doing the dismissing, to the point of saying things that, frankly, don't hold water when it comes down to what grounds something should be dismissed on.

I'm not really telling people what to believe - just don't be lazy and ignorant about how you go about it.
 
I am familiar with the shield laws in the US. But shield laws don't protect against libel. Nor are they a federal matter; there are no federal shield laws, they exist on a per-state basis and vary by state, and no state shield law protects against libel. Get your info right before you post.

But we are pretty much saying the same thing, at least from what I have understood of what you are saying, so I don't know why we are arguing, lol.
 
I am familiar with the shield laws in the US. But shield laws don't protect against libel. Nor are they a federal matter; there are no federal shield laws, they exist on a per-state basis and vary by state, and no state shield law protects against libel. Get your info right before you post.

You're trying to straw man the conversation, and I won't let you. I stand by what I said. You're just trying to win a pissing contest.
 
Well, I still don't know why you are calling it a straw man; you did read the point where I stated:
Actually, what is highlighted in red wasn't the case 10 years ago. Bloggers do that; journalists don't. And if journalists can't say who their sources are, they will find evidence to substantiate what the source stated. But since the article's headline was labeled as a rumor, it's neither here nor there.
 
Who cares? Wait until reliable information is released, or better yet, until the actual hardware can be tested to see how it performs.

I do believe AMD is in greater need of winning back market share in the mobile market, where they lost heavily against Kepler and especially Maxwell. Whether AMD has something in the performance range besides the Radeon Pro (Whompi do) remains to be seen.
 
Yeah, I have to agree with Xorbe. Too many damn rumors. It is so easy for people to make up shit on the internet and pass it off as the truth.

Sure, it is possible Pascal will suck with async compute, but it is also possible Polaris won't be any faster than a 390X.

Right now, who the fuck knows. I will wait to see real benchmarks (not canned ones) with hopefully unbiased reviews.
 
Yeah, I have to agree with Xorbe. Too many damn rumors. It is so easy for people to make up shit on the internet and pass it off as the truth.

Sure, it is possible Pascal will suck with async compute, but it is also possible Polaris won't be any faster than a 390X.

Right now, who the fuck knows. I will wait to see real benchmarks (not canned ones) with hopefully unbiased reviews.


Completely agree with this.
 
The telling part can be just as much what you aren't seeing as what you are. Pascal is in theory months away and NOTHING has leaked or been provided to show async or hardware scheduling. If anything, it's notably missing from Pascal, Volta, etc. on their slides. NVIDIA indicated they would like to add it in the future, just never provided a timeline. If Pascal supported async, I'd think NVIDIA's marketing department would leak a slide to put it to rest. What we seem to be seeing, however, is them playing it off as insignificant and saying it's supported on current hardware, despite performance evidence to the contrary. Instead of pushing what you'd think would be a selling/upgrade point for the future, they went on the offensive against the feature. You also have that GDC presentation where they push people towards graphics over compute. Nobody is going to come out and say they don't support it; you have to read between the lines. That presentation is useless to games in late development; it's geared towards the games of tomorrow. That would seem to make it likely they want compute to be avoided. So it seems likely they either don't support it or aren't competitive with it.

Then, speculating, AMD forced Microsoft's hand with Mantle to bring DX12 to the PC earlier than planned, by extension of the XB1. So I'd say both MS and NVIDIA got surprised.

As for Pascal brute-forcing its way to keep up with Polaris on async... I'll say a dubious "maybe". Polaris would have to be at roughly a 20% disadvantage just to break even with Pascal in a level DX12/Vulkan dev environment.

Then there's multi-threaded coding at the CPU level, which is still a major hurdle without an async scheduler on the GPU. So possibilities like freeing up more CPU resources for AI can't happen easily.
I'd have to agree it caught them by surprise. The shift towards the low level APIs seems to have come late and IMHO only because it ultimately became necessary for VR. They had the opportunity to make the move years earlier. Only once a lot of money got thrown at VR and the overhead of the old APIs was a liability did they switch. Mantle was created because MS wouldn't make the API some devs were wanting. Just looking at the current marketshare there is ample evidence why they wouldn't want to change. Nvidia certainly isn't better off from a business standpoint with DX12.
 
The telling part can be just as much what you aren't seeing as what you are. Pascal is in theory months away and NOTHING has leaked or been provided to show async or hardware scheduling. If anything, it's notably missing from Pascal, Volta, etc. on their slides. NVIDIA indicated they would like to add it in the future, just never provided a timeline. If Pascal supported async, I'd think NVIDIA's marketing department would leak a slide to put it to rest. What we seem to be seeing, however, is them playing it off as insignificant and saying it's supported on current hardware, despite performance evidence to the contrary. Instead of pushing what you'd think would be a selling/upgrade point for the future, they went on the offensive against the feature. You also have that GDC presentation where they push people towards graphics over compute. Nobody is going to come out and say they don't support it; you have to read between the lines. That presentation is useless to games in late development; it's geared towards the games of tomorrow. That would seem to make it likely they want compute to be avoided. So it seems likely they either don't support it or aren't competitive with it.


I'd have to agree it caught them by surprise. The shift towards the low level APIs seems to have come late and IMHO only because it ultimately became necessary for VR. They had the opportunity to make the move years earlier. Only once a lot of money got thrown at VR and the overhead of the old APIs was a liability did they switch. Mantle was created because MS wouldn't make the API some devs were wanting. Just looking at the current marketshare there is ample evidence why they wouldn't want to change. Nvidia certainly isn't better off from a business standpoint with DX12.


Ironically, it was last year's GDC when nV put that slide back into their VR deck, so did it really catch them by surprise, as you so aptly put it, when they were already stating that more APIs were coming? You guys act as if they can turn on a dime, when these things are planned well in advance of the final architecture design being locked down.

http://on-demand.gputechconf.com/gtc/2015/presentation/S5668-Nathan-Reed.pdf

Let's turn this around and make another ludicrous statement: AMD was sideswiped by the early launch of Maxwell 2. Hmm, they had no clue it was coming, since they'd had Maxwell to contend with a year before? No, they knew what was coming, they just couldn't stop it. That would have been too late, but having understood a year ago that a new API was coming, and talking about the possible changes it would bring, shows they knew what was going to happen even before then; it had to be quite a bit before, because they were already talking about where they needed to increase performance or change the architecture to push VR.

The last time any company screwed up against the DX specs was the FX series, and that was because nV left the group.

Whenever these companies have screwed up an architecture outside of the FX (which was only AMD, btw), the architecture wasn't bad compared to their own previous generation; it only looked bad compared to what the opposition made.

It is naive and overly general to believe that companies with a vested interest and billions of dollars riding on a future architecture would screw up against an API specification that lays down the guidelines for the entire industry.

Also, when has nV ever "leaked" upcoming features together with performance claims prior to launch? AMD has never done so either. Doing so gives away what you are doing, lets the competition look into it, and gives them ways to better market their own products.

God, it's like you forget how these companies have launched their cards since forever, just to lend credibility to theories that are so far off you can't even call them theories.
 
Are there any games showing significant speed-ups attributed specifically to async compute? Everything I've seen so far points to DX12's lower CPU overhead versus AMD's relatively poor DX11 driver efficiency accounting for most of the difference.

I suppose that to get a definitive answer we would need a benchmark with an async on/off toggle.
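For what it's worth, that is roughly what such a toggle would amount to at the API level: the same compute work either goes to a dedicated compute queue (async on) or runs back-to-back on the graphics queue (async off), and you compare frame times. A hand-wavy D3D12 sketch of the idea, with all the recording, timing and error handling omitted (the function and parameter names are made up for illustration):

Code:
#include <d3d12.h>

// Hypothetical per-frame submit. Note that D3D12 requires a command list's type
// to match the queue it runs on, so the "off" path needs the same dispatches
// recorded into a DIRECT-type list (compInline) rather than the COMPUTE-type
// list (compAsync) used by the "on" path.
void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* compQueue,
                 ID3D12CommandList*  gfxWork,     // DIRECT list: the frame's draws
                 ID3D12CommandList*  compAsync,   // COMPUTE list: the frame's dispatches
                 ID3D12CommandList*  compInline,  // DIRECT list: same dispatches, for async off
                 ID3D12Fence* fence, UINT64 fenceValue,
                 bool asyncOn)
{
    ID3D12CommandList* gfx[]  = { gfxWork };
    ID3D12CommandList* comp[] = { asyncOn ? compAsync : compInline };

    if (asyncOn) {
        // Async on: compute goes to its own queue and may overlap the graphics work.
        compQueue->ExecuteCommandLists(1, comp);
        gfxQueue->ExecuteCommandLists(1, gfx);
        // Anything submitted to the graphics queue after this Wait will not start
        // until the compute queue has signalled this fence value.
        compQueue->Signal(fence, fenceValue);
        gfxQueue->Wait(fence, fenceValue);
    } else {
        // Async off: everything runs back-to-back on the graphics queue.
        gfxQueue->ExecuteCommandLists(1, comp);
        gfxQueue->ExecuteCommandLists(1, gfx);
    }
}

If the GPU can genuinely overlap the two queues, the "on" path should come out faster; if it can't, both paths should land in roughly the same place, which is more or less what the whole Maxwell argument has been about.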
 