[RUMOR] Pascal in trouble with Asynchronous Compute

Well, if you want to believe AMD and nV engineers, 10% is the max we will see from async in the near future, lol. Of course that doesn't stop people from taking "DX12" benchmarks and games and trying to attribute more than that to async........
 
Some showing 390x at 980Ti levels.

RAZOR: Do us a favor and do a single post with something currently negative about Nvidia other than price with no addendum. Nothing else, just that. You look like an Nvidia employee. I learned never to trust anyone that never has anything negative to say about a company. So thus far I can't trust a word you say in regards to NVIDIA.
 


I will tell you what I see as negative from nV right now: their next launch for Pascal. They will lose ground in notebooks and low-end systems quickly. Also, because big Pascal isn't coming out, this is going to give way to Intel coming into their compute business, which, although it will start off slow, is going to eat up their Tesla business quickly. I stated this close to a year ago: nV has to be careful with the timing of the Pascal and Intel Phi launches, and it is now happening. I don't see big Pascal coming out this half of the year, and the new Phi has been out for close to 6 months now. With the DirectCompute and HLSL 6.0 feature set, the CUDA feature set has much less of an advantage; HLSL 6.0 is about to come out, btw, if you didn't know. Something I was getting at with Anarchist4000 a while back. Now JustReason, I want you to try to keep an open mind here and tell me: what have you gleaned about async since 8 months ago? So far I see nothing but rhetoric that has already been shown to be false with regards to there being no hardware support on nV cards.

From what AMD and nV have stated so far about their next-gen architectures, I don't see how anyone can glean weaknesses on either side. The only thing I can see is how they are positioning their next launch chips and at what markets.

From AMD's point of view, they are targeting OEMs; from nV's point of view, they are targeting consumer upgrades first, as they have always done.

If you don't want to play nice, I can post the hundreds of posts here where you have stated the same BS over and over again about async, and every single post I have made about async, and then we can take a look at the GDC presentation and see that what you, peiter3dnow, and anarchist4000 have stated is BS.

Do you want that, or should I do it? I'm not doing anything for the Easter holidays, so guess what, I have the time.....

About the 390X at the same levels as the 980 Ti: what was the benchmark with async off, or is that just a figment of our imagination? I guess when engineers from both camps tell you 10% is the max they are seeing from async, you will take the word of a marketer anyway. Hmm, the idiots that make these things don't know what they are doing, I guess, but the marketer can take the idiots to the next level.......

Telling ya man, you've got to change your call sign, lol. JustReason doesn't fit with the way you post; there truly is no reasoning.

Before all this I want you to take a look at this document


https://software.intel.com/sites/de...tions-and-DirectX-features-in-JC3_v0-92_X.pdf

Oh wait, Intel is pushing DX12.1 and ROVs. What if AMD's next-gen GPU doesn't come with DX12.1? They might not, because they have been downplaying DX12.1 and its features, and if they don't come out with DX12.1 on their next-gen GPUs, they are going to have problems. Oh yeah, these are caps too, so it could be that nV wanted those features and MS created the caps just for nV, so AMD would not have had time to put those features in their next-gen GPUs since they wouldn't have known about them in time, even though they found out about DX12.1 a full year prior to the launch of Polaris.

PS: this is from GDC, so yeah, Intel is keen on their new IGPs since they are DX12.1.
Do you see how silly that is? Even though we know 100% that AMD doesn't support DX12.1 and ROVs in hardware right now, that doesn't mean they aren't going to do it in a new architecture which is about to debut, even though they downplayed those features because nV had them first.
 
Are there any games showing significant speed ups attributed specifically to async compute? Everything I've seen so far points to DX12's low CPU overhead vs AMD's relatively poor DX11 driver efficiency accounting for most of the difference.

I suppose to get a definitive answer we would need a benchmark with an async on/off toggle.
Ashes of the Singularity Revisited: A Beta Look at DirectX 12 & Asynchronous Shading
AOTS had benchmarks with a toggle for async on/off in DX12 mode. Only game I've seen so far where it was that obvious. Turn it on and you get up to 20% at 4k with a FuryX. Devs stated that only about 20% of the load was compute, so async benefits wouldn't exceed that. DX12 made a bigger difference, but async still gave a significant increase.
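
For anyone wondering what that toggle actually changes at the API level: DX12 lets a game put compute work on a dedicated compute queue so the GPU can overlap it with graphics work. A rough, hypothetical sketch is below (my own names and structure, not AOTS's actual code):

Code:
// Hypothetical sketch: submitting compute work on a dedicated D3D12 compute queue
// so it can run concurrently with the graphics queue -- this is what "async compute"
// means at the API level. Error handling omitted; not AOTS's actual code.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12PipelineState* computePSO,
                        ID3D12RootSignature* computeRS)
{
    // 1. A second queue of type COMPUTE lives alongside the graphics (DIRECT) queue.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // 2. Record compute work into a compute command list.
    ComPtr<ID3D12CommandAllocator> alloc;
    ComPtr<ID3D12GraphicsCommandList> cl;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE, IID_PPV_ARGS(&alloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              alloc.Get(), computePSO, IID_PPV_ARGS(&cl));
    cl->SetComputeRootSignature(computeRS);
    cl->Dispatch(64, 64, 1);   // e.g. a lighting or post-process pass
    cl->Close();

    // 3. Execute on the compute queue; the GPU is free to overlap this with
    //    whatever the graphics queue is doing, hardware permitting.
    ID3D12CommandList* lists[] = { cl.Get() };
    computeQueue->ExecuteCommandLists(1, lists);

    // 4. A fence makes the graphics queue wait only where a real dependency exists,
    //    e.g. before it consumes the compute results.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    graphicsQueue->Wait(fence.Get(), 1);
}

Whether that overlap actually gains you anything depends on the hardware scheduling the two queues concurrently, which is the whole debate here.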

Some showing 390x at 980Ti levels.

RAZOR: Do us a favor and do a single post with something currently negative about Nvidia other than price with no addendum. Nothing else, just that. You look like an Nvidia employee. I learned never to trust anyone that never has anything negative to say about a company. So thus far I can't trust a word you say in regards to NVIDIA.
Good luck with that, I only got him to admit he was wrong after linking a whitepaper where it clearly proved him wrong in the introduction. If it's not clearly spelled out in Nvidia marketing material it won't happen. Most Nvidia publications don't list all their weaknesses for some reason so it's difficult.
 
No, journalists are held accountable under libel law; bloggers should be too, but they play fast and loose, and since they can retract things just as quickly as they post them, it is not frowned upon. This is why you don't see articles like this one in the New York Times.
Really?
A U.S. judge ordered New York Times reporter Judith Miller to jail Wednesday for refusing to divulge her source in the ...
Reporter jailed for refusal to name leak source

It happens all the time in the real press
 
Bits And Chips is the Italian version of WCCFTech. I would take anything they say with a dump truck full of salt. Besides, there are a lot of weasel words used in that post. "Unnamed sources" also usually means "some random post we found on the internet."
Actually they've been talking about Maxwell's lack of it for a long time now. It doesn't really surprise me that Pascal won't have it either.
 
Well if you want to believe AMD and nV engineers, 10% is the max we will see from Async in the near future lol, Of course it doesn't stop people taking "DX12" benchmarks and games and try to attribute more than that to async........
I know, right. It's never a big deal when nVidia is slower. Never enough to matter. nVidia has a single-digit advantage, though, and AMD = FAIL! lol

10% in this rivalry is night and day, and you know it. ;)
 
I will tell what I see negative from nV, right now, is their next launch for Pascal, they will lose ground in notebook and low end systems quickly, Also because Big pascal isn't coming ou... blah blah blah

1- No ASYNC COMPUTE
2- Bad drivers since win10 got released
3- Lies, Lies and more Lies. 3.5gb, Async in Drivers, 10x Pascal Performance...
4- Gimping Kepler Performance just look at Steam VR Test LOL
5- GimpWorks : AKA cancer

and BTW, Pascal isn't going to have async, everybody knows that. sooo keep waiting until Volta.

 
Really?
A U.S. judge ordered New York Times reporter Judith Miller to jail Wednesday for refusing to divulge her source in the ...
Reporter jailed for refusal to name leak source

It happens all the time in the real press


Yes, it's the reporter's prerogative to do such a thing, but going into contempt of court rather than admitting whatever is going on is what ends up happening. Without significant proof that what the reporter wrote or stated is correct, they will get punished at the end of it all.

You see, you are looking at a criminal case in that one. Civil cases are much different from criminal ones, and civil court is where a libel case would be handled. And if a reporter goes into contempt in such a case, it's pretty much an admission that they are guilty.
 
I know, right. It's never a big deal when nVidia is slower. Never enough to matter. nVidia has single digit advantage though and AMD = FAIL!. lol

10% in this rivalry is night and day, and you know it. ;)


LOL, 10% is nothing to sneeze at, but that wasn't where I was going. It's only that people think it was more than that when reading DX11-to-DX12 benchmarks, thinking it was 30% or more, which is just a farce.

Unfortunately, after that entire statement, they only take the 10% and miss the rest of the statement. Funny, isn't it.
 
Ashes of the Singularity Revisited: A Beta Look at DirectX 12 & Asynchronous Shading
AOTS had benchmarks with a toggle for async on/off in DX12 mode. Only game I've seen so far where it was that obvious. Turn it on and you get up to 20% at 4k with a FuryX. Devs stated that only about 20% of the load was compute, so async benefits wouldn't exceed that. DX12 made a bigger difference, but async still gave a significant increase.


Good luck with that, I only got him to admit he was wrong after linking a whitepaper where it clearly proved him wrong in the introduction. If it's not clearly spelled out in Nvidia marketing material it won't happen. Most Nvidia publications don't list all their weaknesses for some reason so it's difficult.


What? Where? You never stated anything of the sort. I linked the paper and it showed what I stated was correct. Then I linked to an AMD developer presentation which stated I was correct. Then I linked to forum posts at B3D which stated I was correct.

You linked to nothing!

You lying SOB!

Yes, I will be linking everything in one post, just to SUTFU.

Liar! I will do this tonight and tomorrow, just for you!
 
Ashes of the Singularity Revisited: A Beta Look at DirectX 12 & Asynchronous Shading
AOTS had benchmarks with a toggle for async on/off in DX12 mode. Only game I've seen so far where it was that obvious. Turn it on and you get up to 20% at 4k with a FuryX. Devs stated that only about 20% of the load was compute, so async benefits wouldn't exceed that. DX12 made a bigger difference, but async still gave a significant increase

Thanks. That's a pretty good increase. It's obvious that when GPUs report 100% usage it's not quite accurate, otherwise there would be no free resources to take advantage of async.

Hopefully we get better, more transparent tools that show exactly what's going on under the hood. A more user-friendly version of GPUView would be cool.
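
In the meantime you can get a rough look yourself with D3D12 timestamp queries, bracketing a pass and reading back how long the GPU actually spent on it. A minimal, hypothetical sketch (my own helper, nothing vendor-specific):

Code:
// Hypothetical sketch: bracketing a pass with D3D12 timestamp queries to see
// how long the GPU actually spends on it -- a poor man's GPUView.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

double MeasurePassMilliseconds(ID3D12Device* device,
                               ID3D12CommandQueue* queue,
                               ID3D12GraphicsCommandList* cl,
                               ID3D12Resource* readback /* >= 2 x UINT64, READBACK heap */)
{
    // Query heap with two timestamp slots: pass start and pass end.
    D3D12_QUERY_HEAP_DESC qd = {};
    qd.Type = D3D12_QUERY_HEAP_TYPE_TIMESTAMP;
    qd.Count = 2;
    ComPtr<ID3D12QueryHeap> heap;
    device->CreateQueryHeap(&qd, IID_PPV_ARGS(&heap));

    cl->EndQuery(heap.Get(), D3D12_QUERY_TYPE_TIMESTAMP, 0);
    // ... record the draws/dispatches you want to measure here ...
    cl->EndQuery(heap.Get(), D3D12_QUERY_TYPE_TIMESTAMP, 1);
    cl->ResolveQueryData(heap.Get(), D3D12_QUERY_TYPE_TIMESTAMP, 0, 2, readback, 0);
    // ... Close(), ExecuteCommandLists(), and fence-wait for the GPU before reading back ...

    UINT64 frequency = 0;                    // ticks per second for this queue
    queue->GetTimestampFrequency(&frequency);

    UINT64* ticks = nullptr;
    D3D12_RANGE range = { 0, 2 * sizeof(UINT64) };
    readback->Map(0, &range, reinterpret_cast<void**>(&ticks));
    double ms = double(ticks[1] - ticks[0]) / double(frequency) * 1000.0;
    readback->Unmap(0, nullptr);
    return ms;
}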
 
1- No ASYNC COMPUTE
2- Bad drivers since win10 got released
3- Lies, Lies and more Lies. 3.5gb, Async in Drivers, 10x Pascal Performance...
4- Gimping Kepler Performance just look at Steam VR Test LOL
5- GimpWorks : AKA cancer

and BTW Pascal isnt going to have Async, everybody knows that. sooo keep waiting until volta.


1) Yeah, for you it will never be there, because you don't know what it is anyway. Let's see if I can paint this in a way you can understand.
So a buffalo is a kind of bull, so how can you get chicken wings off it?


2) 3.5 GB is still in court. I have signed the petition for it, have you?

3) Kepler doesn't have some features that are needed for VR. I don't think nV has ever marketed Kepler for VR either.

4) You are cancer.
 
Thanks. That's a pretty good increase. It's obvious that when GPUs report 100% usage it's not quite accurate, otherwise there would be no free resources to take advantage of async.

Hopefully we get better, more transparent tools that show exactly what's going on under the hood. A more user friendly version of GPU view would be cool.


The problem with that benchmark is that going to higher resolutions also puts strain on the PS (pixel shader) portion of the program, so you can't just say the 20% increase (or differential in this case) is all from compute (async). And we know that AMD hardware does better at higher resolutions to begin with. So people are making assumptions when the engineers have told the dev community, pretty much point blank, what they are seeing, but others want to believe something else. That is just stupidity.
 
1) Yeah, for you it will never be there, because you don't know what it is anyways. lets see, if I can paint this in a way you can understand.



2) 3.5 gb is still in court, I have signed the petition for it have you?

3) Keplar doesn't' have some features that are needed for VR. I don't think nV has ever marketed Keplar for VR either.

4) You are cancer.

So much nvidia, so much LIES!

 
Well, if we believe this rumor, Yakk, should the leak of Polaris 10 having the shader throughput of a GTX 770 be believed too?

AMD Polaris 10 specs and possible benchmark

If you believe that Polaris 10 on a new process node will run at 800 MHz, sure, be my guest, believe what you will!... It isn't like engineering samples are heavily underclocked and tested just to see that they work!

^^ That is the reason why no one believes that "leak"
 
I don't believe it either. In the context of what they have shown so far it's possible, but I won't take it as truth.

But in the context of node vs. MHz, we can't make any assumptions right now, because the architecture has too much to do with frequency to make any determinable outcome.
 
Here ya go, Anarchist4000 and JustReason, episode 1:

Tag-team WrestleMania. Wallops, follies, and crashes.




AMD Zen Rumours Point to Earlier Than Expected Release




Now, the entire conversation was about HSA, but you came in and started talking about graphics APIs, which wasn't even the premise of the conversation. I tried explaining that to you for another page, and finally you started understanding what I stated. Being bull-headed and going in a totally different direction from what was being stated just goes to show ya.



The funny thing is it’s the same people in this thread that are in all the threads.



JustReason, Anarchist4000, Peiter3dnow. All of you guys use the same tactics of myopia.

AMD Recruits Hitman for Dx12 Workload Management on ACEs

That still relies on the shaders having enough information to make informed optimizations. Tessellation for example, why even lock the levels to 8x/16x/64x and not vary them by distance? 64x on that tree off in the distance is overkill, but how would a driver know it's off in the distance? Even on Nvidia cards the performance hit often doesn't seem worth it. Devs have been implementing LOD in games for ages.


I'm still waiting to see some tessellation benchmarks with DX12. I'm guessing the async and concurrent nature of DX12 allows AMD cards to largely bypass that tessellation bottleneck they've had. Especially as the amount of compute workloads increases.



For a person who says others don't know the basics of graphics programming, making a statement like this is out of this world.

It took half a page of posts to explain to you the basics of the tessellation pipeline and why what I stated was true while what you stated was false, and if you truly did know what you were talking about, then you just lied!



Links and all; you had none in any of your posts. You probably just assumed adaptive tessellation was automagical and didn't need a tessellation factor.
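
For anyone following along: "adaptive" tessellation still means the app (or the hull shader's patch-constant function) has to compute and supply an explicit factor. A minimal, hypothetical sketch of the kind of distance-based factor you would feed into a constant buffer (my own example, not from any shipping game):

Code:
// Hypothetical sketch: adaptive tessellation is not "automagical" -- somebody has
// to compute an explicit factor. Here it falls off with distance and is clamped
// to the D3D hardware limit of 64x.
#include <algorithm>

float ComputeTessFactor(float distanceToCamera,
                        float nearDistance = 10.0f,   // full detail inside this range
                        float farDistance  = 200.0f,  // minimum detail beyond this range
                        float maxFactor    = 64.0f)   // D3D11/12 hardware maximum
{
    // Normalize distance into [0,1], then interpolate from maxFactor down to 1.
    float t = (distanceToCamera - nearDistance) / (farDistance - nearDistance);
    t = std::min(std::max(t, 0.0f), 1.0f);
    float factor = maxFactor * (1.0f - t) + 1.0f * t;
    return std::max(1.0f, factor);   // a tree in the distance ends up near 1x, not 64x
}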



And again, same people: Anarchist4000 and JustReason.



This is getting to be a habit; maybe you guys should shack up together, I see love in the air!

Nvidia showed off "Pascal" w/ Maxwell GPUs, later removes photos.

This entire post from this point on, Anarchist, you didn't understand that the VS needs the information from the HS for it to work. From a person that doesn't know the basics, to another that surely doesn't know his.....


Again it's Anarchist4000 and JustReason; maybe shacking up isn't enough, get a room!


Next up, Anarchist4000 and JustReason: Sex, Lies, and Videotape! Coming in a few hours. Too bad the search feature only goes back a certain number of pages of posts; I know there were more prior to these.
 
1- No ASYNC COMPUTE
2- Bad drivers since win10 got released
3- Lies, Lies and more Lies. 3.5gb, Async in Drivers, 10x Pascal Performance...
4- Gimping Kepler Performance just look at Steam VR Test LOL
5- GimpWorks : AKA cancer

and BTW Pascal isnt going to have Async, everybody knows that. sooo keep waiting until volta.

I can never decide if I want to laugh or sigh at your posts. They are generally half nonsense and the other half true for both companies. Always dramatic.

One thing is for sure. I am going to wait for the max area chips, gen II VR and then reassess. I feel like the early adopters are going to get screwed harder than usual this go around.

Seems like the general consensus from relatively unbiased posts is that async may be a moderate discriminator. Even if Pascal doesn't have it, it might not be a devastating feature to only partially support. Seems like Maxwell does OK at AoS anywho. We'll see...
 
Well, when shit hits the fan it hits everyone, so by association with your posts, yeah, better believe it. Unlike you and others, I don't give a damn if it hits me, because that's the way I am. I'm not here to play nice; I'm here to have a good conversation about something I'm passionate about. Unfortunately, people here apparently have to prove their points of view with misconstrued links, lies, and unfounded posts, lol. And when they're called out, well, they should understand that it will be called out.

Sorry juxtaposition is a bitch and is unavoidable.
 
Well, shit. I hope this is true. If AMD actually starts kicking nVidia's ass, we should see more technological progress.
If only they could put up some fight against Intel, that would be great as well.
Both Intel and nVidia are just milking us for our money.
Remember 20 years ago? The competition: there were not two but several CPU/GPU makers, the tech world exploded, we digitized everything. 1995-2005 was when we saw insane technological jumps. I mean, compare how we lived in 2000 and how we live now. I want that back. I want competition, I want tech giants fighting against each other to earn our money, not us fighting to get their stuff.
 
Well, shit. I hope this is true. If AMD actually starts kicking nVidia's ass, we should see more technological progress.
If they only could put some fight against Intel, that would be great as well.
Both Intel and nVidia are just milking us for our money.
Remember 20 years ago? The competition, there were not two but several CPU/GPU makers, tech world exploded, we digitalized everything. Between 1995-2005 was when we saw insane technological jumps. I mean, compare how we lived in 2000 and how we live now. I want that back. I want competition, I want tech giants fighting against each other to earn our money, not us fighting to get their stuff.

I think we are about to see cheaper GPUs this year, hopefully nothing above $500.
 
Well, shit. I hope this is true. If AMD actually starts kicking nVidia's ass, we should see more technological progress.
If they only could put some fight against Intel, that would be great as well.
Both Intel and nVidia are just milking us for our money.
Remember 20 years ago? The competition, there were not two but several CPU/GPU makers, tech world exploded, we digitalized everything. Between 1995-2005 was when we saw insane technological jumps. I mean, compare how we lived in 2000 and how we live now. I want that back. I want competition, I want tech giants fighting against each other to earn our money, not us fighting to get their stuff.

Yep, the 970 was really milking you for your money. The 980Ti really milked your money. Damn Nvidia and their pricing.

You don't seem to realize GPU/CPU progress is now limited by manufacturing ability more than designs. If you want to go back to the days of Phenoms and Pentiums then have fun lol.
 
I hope this is true. They will have to price it lower, so I'll have more GPU power to brute-force DX11 games. And Nvidia will have more incentive to push Vulkan.
 
I hope this is true. They will have to price it lower so I'll have more gpu power to bruteforce DX11 games. And Nvidia will have more incentives to push Vulkan

I think they already do, due to Tegra. If anything it's AMD that needs an incentive to push Vulkan since they're now all about DX12 and kumbaya. They don't have a huge presence in laptops and pretty much zilch in the phone/tablet space. They're also lagging behind in Linux.
 
Well if you want to believe AMD and nV engineers, 10% is the max we will see from Async in the near future lol, Of course it doesn't stop people taking "DX12" benchmarks and games and try to attribute more than that to async........

...and when you consider AMD cards typically consume more power compared to their counterparts, they probably balance out if you look at it from the perspective of performance per watt. Still, it will be interesting to see if they can extract more performance from it down the road, as it's still relatively new, but I'd like to see AMD take the top spot for a while.
 
Both AMD and nV have the same (similar) limitations when it comes to node size, and the only thing that will ever make one better than the other in a generation is architectural design. It's been a long time since AMD (ATi at the time) was able to outright out-design nV, for whatever reason (R300); since then nV has been able to keep up with AMD/ATI easily. Since the G80, nV has been able to keep up with AMD and usually best them on performance, performance per watt, etc., even when their cards don't age as well later on.

This is the first time we might actually see both companies successfully launch new architectures on a new node at similar times. We have never seen this before. What we have seen is one or the other company screw up, lol, AKA R600, FX, RV6x0 and so on.

Keep this in mind: when everyone was thinking the G80 was going to look like a doubled-up 7800 GPU with separate pixel shaders and vertex shaders, nV came out and blew away our expectations of what they could do. A small amount of foresight from an experience like this should tell us that even though they are downplaying something, that has nothing to do with what is potentially coming out within a year of whatever they were trying to steer programmers away from programming for. The downplaying has everything to do with what they are trying to sell right now, not unannounced products.

But yet people put A and B together as if it's PB&J and try to make a point. It's folly when A and B have nothing to do with each other.
 
I'm not counting on Pascal being as bad as Maxwell at concurrent async + graphics workloads, but if it is, I'll be thrilled. A better chance for AMD to claw back some market share, and a better chance to use a graphic of the tears of Nvidia fans being used as the coolant for a Fury X card.
 
I expect this rumor to actually be true. What type of a real-world impact it will have on games is another matter entirely.
 
Yep, the 970 was really milking you for your money. The 980Ti really milked your money. Damn Nvidia and their pricing.

You don't seem to realize GPU/CPU progress is now limited by manufacturing ability more than designs. If you want to go back to the days of Phenoms and Pentiums then have fun lol.

What? They were top-of-the-line CPUs back then, just like the 6700K/59xxK are today. Don't tell me you're comparing today's top CPUs to 10-year-old CPUs? Of course they are going to be slower and crap, but so will the 6700K/59xxK in 10 years.
The 980 Ti showed up shortly after the Titan X offering nearly the same performance at 60% of the price; that's MILKING.
The 970 is shite for 4K, and also 3.5 GB + 0.5 GB ;)
 
I expect this rumor to actually be true. What type of a real-world impact it will have on games is another matter entirely.
This is where I am at on this. But as always, we're in "let's see what all this actually means when actually playing games" territory. This is another T&L argument as far as I'm concerned.
 
Hey razor, how about you prove where I am wrong, where I am stating fact, not assumptions about unreleased products. Fact is, in that Zen thread I handed you and the others their behinds because I backed up every point VERBATIM with links and quotes.

Maybe when you decide you can be reasonable and treat me and others with respect, you may receive it in turn. I didn't use to mind you a great deal, as you seem to have some knowledge, but your lack of respect for others' opinions is becoming quite telling.

In the case of this thread, can you post verbatim quotes of Nvidia claiming they will be able to do hardware async in relation to DX12? Not referencing preemption or other terms sidestepping or obfuscating the simple question aforementioned.
 
You kids are going to have to play nice or we are going to have to start limiting access.
 
I think market equilibrium will happen regardless of how performance due to features turns out; the main reason AMD lost so much ground was lack of product more than anything else.


Hey razor, how about you prove where I am wrong, where I am stating fact not assumption on unreleased products . Fact is in that Zen thread I handed you and the others their behind because I backed up every point VERBATIM with links and quotes.

Maybe when you decide you can be reasonable and treat myself and others with respect you may receive it in turn. I didn't used to mind you a great deal, as you seem to have some knowledge, but your lack of respect for others opinions is becoming quite telling.

In the case of this thread, can you post verbatim quotes of Nvidia claiming they will be able to do hardware async in relation to DX12? Not referencing preemption or other terms sidestepping or obfuscating the simple question aforementioned.

Just look at the GDC presentation; they stated it right there, and it was only two weeks back......

I treat people with the same amount of respect as they show me and others. When you cross that line, I will too, and I have done so with the same amount of conviction as your posts.

Please, in the Zen thread you had no idea what you were talking about, as others stated the same thing I did. Pretty easy to see that.
 
Hitman Developers Compare DirectX 12 Against DirectX 11; Advanced Visual Effects Showcased | DualShockers

5%-10% more performance. Just enough, in my opinion, for all the AMD cards to rise one tier in performance. Of course, these are the first attempts at DX12; the gains might become larger later on.

WCCFTECH did an article on it also.
Async Compute Only Boosted HITMAN's Performance By 5-10% on AMD cards; Devs Say It's "Super Hard" to Tune
We are doing real-world gaming testing now. Don't really get caught up in the canned benchmarks. But thanks for the links we have seen 1000 times.

And this post is off topic, so let's get back to Pascal. :)
 
Hitman Developers Compare DirectX 12 Against DirectX 11; Advanced Visual Effects Showcased | DualShockers

5% - 10% more performance. Just enough in my opinion for all the AMD cards to rise 1 tier in performance. Of course these are the first attempts at DX12; might become larger later on.

WCCFTECH did an article on it also.
Async Compute Only Boosted HITMAN's Performance By 5-10% on AMD cards; Devs Say It's "Super Hard" to Tune


Interesting; these bullet points especially caught my attention:

  • The developers were surprised by how much they needed to be careful about the memory budget on DirectX 12
  • Priorities can’t be set for resources in DirectX 12 (meaning that developers can’t decide what should always remain in GPU memory and never be pushed to system memory if there’s more data than what the GPU memory can hold) besides what is determined by the driver. That is normally enough, but not always. Hopefully that will change in the future.

This seems to be a *VERY* good reason why AMD drivers now all have access to system memory in addition to GPU memory. I.e., a 280X has 3 GB onboard VRAM + 6 GB system RAM (pagefile type) = a total of 9 GB accessible memory. This would take some of the burden off the developers.
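
For what it's worth, DX12/DXGI does at least expose the memory budget the OS gives you, so a title can manage residency itself. A minimal, hypothetical sketch of checking it (my own example, not from the Hitman devs):

Code:
// Hypothetical sketch: querying the local (VRAM) and shared memory budgets DXGI
// exposes, which is what a DX12 title has to manage itself instead of relying on
// the driver to page things in and out for it.
#include <dxgi1_4.h>
#include <cstdio>

void PrintVideoMemoryBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};

    // LOCAL = dedicated VRAM on the card.
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    printf("VRAM budget: %llu MB, currently used: %llu MB\n",
           info.Budget / (1024 * 1024), info.CurrentUsage / (1024 * 1024));

    // NON_LOCAL = system memory visible to the GPU (the "extra" pool mentioned above).
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &info);
    printf("Shared system memory budget: %llu MB, used: %llu MB\n",
           info.Budget / (1024 * 1024), info.CurrentUsage / (1024 * 1024));
}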
 
To reiterate what Kyle has already said, let's get this train back on the right track.
 
I told everyone last year on WCCFTech what was going to happen with Nvidia and the Gaming Media.

First, they are going to lie about async compute being in Nvidia's cards; then once that gets debunked, they are going to lie about it being in Pascal's cards; then once that gets debunked, they are just going to lie about the whole efficacy of async compute per se, as if it's nothing special.

If async compute isn't anything special, then why did the developer of the upcoming Doom (2016) recently say they have seen big gains from async compute?

 
I knew when that rumored article about Pascal came out yesterday that Nvidia would put something out to stop the curiosity growing throughout the PC gaming community, by now just downplaying the efficacy of async compute and saying it's nothing special to begin with. Yeah, right. It has been more than 2 weeks now since the launch of Hitman, the first true DX12 game, yet Digital Foundry and other renowned benchmarkers refuse to do a benchmark between AMD and Nvidia. Something fishy is going on.
 