[RUMOR] Pascal in trouble with Asynchronous Compute

I knew that when that rumored article came out yesterday about PASCAL, Nvidia would put something out here to stop the curiosity growing throughout the PC gaming community by simply debunking the efficacy of async compute, saying it's nothing special to begin with. Yeah, right. It has been more than two weeks now since the launch of Hitman, the first true DX12 game, yet Digital Foundry and other renowned benchmarkers refuse to do a benchmark between AMD and Nvidia. Something fishy is going on.

I didn't think there was a shortage of people pasting Hitman benchmarks everywhere. Maybe Digital Foundry doesn't care because Hitman isn't a big name title (though it is close).
 
Digital Foundry always does launch-day benchmarks of games; it's their modus operandi. In fact, they did a launch-day benchmark of Hitman between the PS4 and Xbox One, so why, after two weeks, haven't they gotten around to doing one between AMD and Nvidia? Whichever way you look at it, it's damage control.
 
The simple solution to this would be for Nvidia to release some benchmarks of their async performance. Puts a quick end to the controversy since they say they support the feature, but I don't see that happening.

This seems to be a *VERY* good reason why AMD drivers now all have access to system memory in addition to GPU memory. I.e., a 280X has 3GB onboard VRAM + 6GB system RAM (pagefile-style) = a total of 9GB accessible memory. This would take away some burden from the developers.
I'm kind of curious if this is becoming a DX12 thing or just an AMD thing: treating video memory like a giant cache that's managed automatically, instead of just memory the way it has been. Sort of lines up with what HSA was doing, but opposite to the explicit nature of the low-level APIs. That would be a rather fundamental change for a developer. It's likely Nvidia already does something similar in their driver. Might have to go read up on that a bit, but last I checked, things should fail when you exceed the memory capacity.
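For what it's worth, DX12/WDDM 2.0 does expose this to apps today: the OS hands each process a memory "budget" for both VRAM and system RAM, which the app can query and is expected to stay under. A minimal sketch using IDXGIAdapter3::QueryVideoMemoryInfo (first adapter hard-coded, error handling omitted; this only shows the query, not any eviction policy):

```cpp
// Query how much local (VRAM) and non-local (system RAM) memory the OS
// currently budgets for this process. Link with dxgi.lib.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Grab the default adapter (a real app would enumerate and pick one).
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);

    ComPtr<IDXGIAdapter3> adapter3;
    adapter.As(&adapter3);  // QueryVideoMemoryInfo lives on IDXGIAdapter3

    // LOCAL = on-board VRAM, NON_LOCAL = system RAM visible to the GPU.
    DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, nonLocal = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

    std::printf("VRAM budget:       %llu MB\n", local.Budget / (1024ull * 1024));
    std::printf("System RAM budget: %llu MB\n", nonLocal.Budget / (1024ull * 1024));
}
```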
 
Bits and Chips...

Might as well be called The Lisa Su blog.

When async compute is considered a factor in gaming (in about two years) I'll care. Right now people should really be hoping the rumored deal between Intel and AMD goes through; otherwise AMD's debt will be permanent until they go into full liquidation mode.
 
1- No async compute
2- Bad drivers ever since Win10 got released
3- Lies, lies, and more lies: 3.5GB, async in drivers, 10x Pascal performance...
4- Gimping Kepler performance; just look at the Steam VR test, LOL
5- GimpWorks, AKA cancer

And BTW, Pascal isn't going to have async, everybody knows that. So keep waiting until Volta.


You should really go back to /g/ with that type of post.
 
Right now people should really be hoping the rumored deal between Intel and AMD goes through; otherwise AMD's debt will be permanent until they go into full liquidation mode.

Actually, I'm pretty curious about this, myself. If Intel buys out AMD's Radeon division, NVIDIA is in a whole world of hurt, long-term. (But then again, there were the Samsung + AMD rumors, and so on ...)
 
I don't know if it's so much a buyout as a licensing deal. I think Intel wants to look at AMD for their integrated stuff, since going with AMD would be cheaper. Also think about this: the current agreement between Intel and NV is set to expire next year, so having some leverage can work in Intel's favor. It could also be Intel wanting to get in bed with AMD because their hands are in a lot of the VR stuff right now; who knows.

Whatever the case, AMD needs something strong in their corner, and very, very soon, because, speaking as an investor/day-trader and not a tech person: if AMD doesn't stop hemorrhaging money, they are perma-f**ked.
 

It could also be Intel trying to get a better deal from Nvidia by shopping around with AMD. Intel's interest means absolutely nothing until a contract is signed.
 
Bunch of stuff it could be. HSA-related IP for memory wouldn't be unreasonable. APU IP, with the stacked-die stuff coming down the road. Also those wide IO links both AMD and Nvidia have, which are somewhat platform-locked at the moment: NVLink only works on IBM's POWER, and AMD's version obviously works on their own CPUs. Maybe there are patents related to async, but I'm not sure how beneficial that is to Intel. And of course it may just be a negotiating tactic.

Regardless, with all the lower-end systems going towards SoCs, there's definitely room for useful graphics IP.
 
Intel already has IP for 3D memory and high-speed interconnects; what they need is what they are licensing from nV right now...
 
It could also be Intel trying to get a better deal from Nvidia by shopping around with AMD. Intel's interest means absolutely nothing until a contract is signed.
That's pretty much what I meant by leverage.
 
Good grief. A GPU has only so many shaders; when a purely graphics operation is being processed, it is rare that all the shaders are being used, meaning some are doing nothing. Async compute allows better use of all the shaders if you also have compute code running concurrently with graphics code. So if 10% of the shaders are not being used effectively in graphics processing, the most I would say you could gain is 10%, provided the compute code is as effective as the graphics routines and uses that idle 10% of the shaders. Each operation will have a varying amount of shader use, with up to, let's say, 40% of the shaders going unused.

The more compute operations your program has, the better async will show, but compute operations also use shaders, meaning graphics operations will have fewer to use. Tuning how many compute operations run concurrently with graphics operations to maximize performance would be key. If you have a large number of procedural textures, lighting calculations, and other stuff using compute, async compute will probably make a night-and-day difference compared to non-async operation; then again, it might just be better to do more with graphics operations and less with compute operations.
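For anyone who hasn't seen what this looks like from the developer side: in D3D12, "async compute" just means feeding work to a second command queue next to the graphics one. A minimal sketch, assuming a device already exists (function and variable names are mine; whether the two streams actually overlap and fill idle shaders is up to the hardware scheduler):

```cpp
// Create a DIRECT (graphics) queue and a separate COMPUTE queue.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    // Command lists submitted to computeQueue may overlap in time with
    // lists on gfxQueue; synchronization between them is done explicitly
    // with fences.
}
```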

Now, a 20% increase in AotS with a Fury X, to me, shows that the large number of shaders on Fiji is not being used effectively by graphics operations alone but is available if you include compute operations at the same time, meaning Fiji is a more effective multi-tasking GPU.
 
Well, it's well known that GCN utilization rates are low, but this isn't due to the ALUs not being used effectively, at least not all the time. Funny thing, I was just having a similar discussion on B3D:

AMD: Polaris / Arctic Islands (R* 4** series) Speculation, Rumors, and Discussion

Like any other architecture, GCN has had tons of issues since its very release. In order to get the best out of GCN, those issues must be worked around by the programmers. There are all kinds of pipeline bubbles and inefficiencies due to low wavefront occupancy during simple passes like the z-prepass and shadow pass, low command processor throughput in some cases, low primitive rate, low tessellation rate, low utilization with low triangle counts, etc., etc. Very broad architectures like GCN (broad wavefronts, plus several issue ports for each SIMD: LD/ST, BR, scalar instructions) suffer the most from such inefficiencies. So he probably meant that there are other things in the graphics pipeline which could limit performance to a greater extent than the compiler's inefficiencies in loop unrolling, so these things could be the primary bottleneck for GCN performance.

Fortunately, there are good compute shaders with async support which could fix some of GCN's graphics pipeline inefficiencies to some extent. Unfortunately, additional programming effort is required to hand-tune async for tons of chips with different capabilities and FixedPipe/ALU/BW ratios, and for the tons of graphics options and resolutions available on the PC (since bottlenecks can shift with different options/resolutions/shader modifications/RAM bandwidth/etc.), while the 5% perf. outcome in some cases is still highly questionable if you think of more general and useful algorithmic-level optimizations like Intel does in their research papers.

I'd prefer to see more optimization at the algorithmic level, like Sebbbi does, rather than all this async stuff to fix the pipeline inefficiencies of GCN. There are 30% gains from async in some corner cases, but they are really corner cases which have nothing to do with real life. I don't buy something like The Children of Tomorrow as a viable example of 25% gains applicable to normal games, because it's a game with very simple geometry and slow geometry shaders, which are used for CR; there must be very low utilization with such simple geometry on GCN plus geometry shaders, so a lot of resources should stay available for something like asynchronous compute. Hopefully, developers will switch their attention back to all users of the PC market and do better algorithmic-level optimization, rather than continuing to waste time on GCN-specific optimizations only.


And here ya go, Anarchist: divergence is less of an issue for AMD than for nV's Maxwell 2. What was that again? LOL, gotta love it. Actually learning from people that know what they are talking about.
 

I am glad you brought this up, because I've been telling people since the Ashes of the Singularity beta came out last year that what we are seeing is full utilization of AMD's cards. A lot of PC gamers don't know that, overall, AMD's cards have more computational power than Nvidia's, but Nvidia, especially under DX11, utilizes their GPUs better in a lot of DX11 games. Now, under DX12/Vulkan, AMD is finally showing its prowess.

And now Polaris has a geometry processor along with the Async Compute Engines and Command Processor, so you will definitely see optimal GPU utilization. I told people last year that AMD was playing chess while Nvidia was playing checkers. AMD has the gaming market sewn up with the PS4, Xbox One, Apple's iMac, and now their new licensing agreement with Intel next year.

Tiago Sousa on Twitter
 
Too many ifs and buts, Moorish. Business is always a chess game; we have seen AMD/ATi play it well in the past, but it never lasts long. Unfortunately, this last gen it took them way too long to get their cards out. That hurt, more than AMD/ATi has ever been hurt, and it will take a long time to heal. The console market has very little influence on the PC market, as developers who target consoles tend not to target PCs; although there are games that come out on both consoles and PC, those dev teams have much more experience with low-level APIs, unless they outsource to smaller devs.

And yes, Moorish, I stated that close to 8 months ago as well. AMD has put many of their eggs into what is going on now, but I highly doubt that will help them unless they sell off their graphics division (unless Zen is good). The CPU division is in very bad shape, and their GPU division is not much better right now. Losing market share goes much faster than gaining market share. I do expect AMD to gain around 10% market share on the GPU side in the quarter following Polaris's release (discrete and mobile). But that is not enough to cut AMD's losses down far enough.


You say chess vs. checkers; it doesn't matter what game you are playing, because in both of those games there are many different strategies that work, and both players need to understand what is going on, what is about to happen, and what they can do to stop it. I take more of a view from Sun Tzu: know your enemy and know yourself, and win a thousand battles. The dynamics of business change, but adapting to change is much more important than the plan itself. AMD's lack of funds is something they are trying to account for now, and it is getting to be a heavy burden. Take either Sun Tzu or The Book of Five Rings and look at AMD, Intel, and nV and how they change based on pressure from each other: you will see AMD is and always has been at the short end of the stick, and this is due to resources. They didn't do what they had to do when they had the resources, and when they don't have the resources, they take risks that are overplayed.
 
And here ya go, Anarchist: divergence is less of an issue for AMD than for nV's Maxwell 2. What was that again? LOL, gotta love it. Actually learning from people that know what they are talking about.
Damn good thing I never made anything close to that claim, then. You were the one that said divergence mattered; I said it was insignificant and unrelated. That's why you have zero credibility on this shit. You make argument A on someone's behalf (strawman), provide evidence for unrelated argument B (ADHD), then go further off topic with argument C. Someone points out you're wrong, and you proceed to have a meltdown on the forums. You've been stalking me for days now, running your mouth, completely unable to make any viable points beyond putting words in my mouth and trying to prove those wrong. You've yet to provide any evidence whatsoever that anything I actually said was inaccurate. In fact, as more evidence becomes available, even my speculation seems to be proven accurate. Everyone is free to see this shit and can make up their own mind. You seriously need some help and are only making yourself look like a fool. If you want to actually have an adult discussion, then start acting like one.
 
GCN has fewer inherent problems with divergence; Maxwell has more. This is because of the way the ALUs are set up on each of the cards. And if you want me to quote what you stated, I will; I remember it like night and day. You specifically stated that divergence doesn't matter and that it was purely in scientific models where it might cause problems, when I stated it does in game situations, because certain shaders on nV will cause those exact problems. I'm not stalking you; yes, I am running my mouth, but I'm not going out of my way to post specifically about you. It just happens that, from previous conversations I had with you, things are popping up now that show I was talking about things that were correct.
 

You said the console market has little influence on the PC market... If it wasn't for AMD giving the consoles x86 architecture, we wouldn't be having all of these games now. Why do you think all of the 3rd-party and 2nd-party games have come over to the PC since the advent of the PS4 and Xbox One? AMD has made PC game development much easier from a developer's standpoint by giving the consoles x86 architecture.

Developers always program really low-level for consoles, so standardizing consoles with the PC will also make them program low-level for PC, like they used to back in the late 90s with 3dfx and whatnot. Think about it for a couple of seconds: why would developers utilize async compute knowing that it makes Nvidia LOSE performance instead of gaining performance, especially given Nvidia's current market share? It's because of the consoles, so the consoles have a lot of influence over PC gaming; in fact, they are the ones slowing everything down. Just look at that comment from that Ubisoft developer about The Division a while back.

Nvidia is really in a tough spot right now, and they are going to have to rely on the gaming media's propaganda to get them out of it. As more PC gamers finally realize that Nvidia, including PASCAL, can't do async compute, and that their cards lose performance because of it, PC gamers are going to run in droves over to AMD. I even question whether Volta will have Async Compute Engines built in. Nvidia was definitely caught flat-footed by all of this.

Not to mention the potential lawsuit coming at Nvidia for lying to their customers, particularly 980 Ti owners, about Maxwell supporting async compute, coupled with losing their licensing agreement with Intel in 2017, which according to analysts would cost them 30% of their revenue; hence, Nvidia is in trouble.

Couldn't happen to a better company as far as I am concerned, because Nvidia gets away with a lot of crap that the gaming media gives them a free pass on.

 
You must be new to PC gaming or just not have a clue what you're talking about. PC has had console ports LOOOONG before the PS4 ever existed. This is nothing new.
 
You said the console market has little influence on the PC market... If it wasn't for AMD giving the consoles x86 architecture, we wouldn't be having all of these games now. Why do you think all of the 3rd-party and 2nd-party games have come over to the PC since the advent of the PS4 and Xbox One? AMD has made PC game development much easier from a developer's standpoint by giving the consoles x86 architecture.

Why do you say that? There have always been cross-platform games since the first Xbox and PS. That hasn't changed one iota. Not to mention development for consoles is much different from development for PCs; the APIs have been the biggest burden, and even with low-level APIs for PCs this hasn't changed either. We have seen bad ports from console to PC in the last few months.

Developers always program really low-level for consoles, so standardizing consoles with the PC will also make them program low-level for PC, like they used to back in the late 90s with 3dfx and whatnot. Think about it for a couple of seconds: why would developers utilize async compute knowing that it makes Nvidia LOSE performance instead of gaining performance, especially given Nvidia's current market share? It's because of the consoles, so the consoles have a lot of influence over PC gaming; in fact, they are the ones slowing everything down. Just look at that comment from that Ubisoft developer about The Division a while back.

Actually, this isn't the case: most console teams don't use low-level interactions; a few do, and it takes a lot of experience to do low-level programming. This is one reason why Sony looks at your dev team to see if they have the experience to program for their console, and also, from a business point of view, whether the game is compelling enough for them to market for their console; as does MS.

Just call up Sony (they have much more stringent oversight of who can get dev systems; they will tell you you need x years of experience per developer and a certain number of PC titles launched); even MS looks at your product too, but not as strongly.

I'm not sure what the Ubisoft developer stated about The Division. I'm guessing it refers to having maxed out the consoles; yeah, that is normal, it happens every generation of consoles. That doesn't stop developers making better-looking games on PCs, does it?

Nvidia is really in a tough spot right now, and they are going to have to rely on the gaming media's propaganda to get them out of it. As more PC gamers finally realize that Nvidia, including PASCAL, can't do async compute, and that their cards lose performance because of it, PC gamers are going to run in droves over to AMD. I even question whether Volta will have Async Compute Engines built in. Nvidia was definitely caught flat-footed by all of this.

nV's architecture is far more efficient from an ALU point of view, but there are other weaknesses that will affect them; one of them is scheduling for async, but this can be overcome with experienced programmers. We have no idea about Pascal from a pipeline or scheduling point of view, outside of the rumors that pop up on websites and link to something nV is doing, like releasing GameWorks as open source, and async, which is just ludicrous.

I don't know how old you are or when you started gaming or got into the video card scene, but just look back at the G80: did anyone think it was going to be a unified shader system? I didn't. I found out a month and a half before release that it was; just check my B3D posts about it. It was a shocker. Everyone thought it was a pumped-up 7800 architecture.

ALU efficiency is not everything; there are many aspects of the GPU that we aren't even considering. Also, with every new generation of AMD or nV architecture, async code changes. It's difficult to migrate code from one architecture to another, even between the same IHV's GPUs across generations. We have seen that in the games released with async lately. Developers from both camps, and a third, Intel, have talked about this at GDC. This is actually quite an old problem for any type of low-level code. While many devs like the new low-level APIs, they have also stated this is going to be a problem for developers who don't have this experience. So in the short term this is a hurdle that has to be overcome. I'm sure it will be, but there will be growing pains. And short term is pretty much years: it takes time to develop engines, and then more time to make the games on them.


Not to mention the potential lawsuit coming at Nvidia for lying to their customers, particularly 980 Ti owners, about Maxwell supporting async compute, coupled with losing their licensing agreement with Intel in 2017, which according to analysts would cost them 30% of their revenue; hence, Nvidia is in trouble.

They support async; there is no way around it. They wouldn't have been able to get the DX12 stamp if they didn't. That is the rhetoric, as others have been saying, and people really should stop repeating it, because it's not even a question anymore. How well they support it is a different matter: Maxwell 2 has more scheduling overhead than GCN when doing certain things.
 

I specifically said the word ALL. Before the PS4 and Xbox One, only a few 3rd-party games, much less 2nd-party games, came to the PC. PC gamers had to cross their fingers hoping they would get AAA titles; now it's expected that all 3rd-party AAA titles will come out on the PC.
 
? Yeah, that isn't right...

If you want me to expand on this, PM me; I will send you an NDA, and I can go into depth about the talks I've had with publishers.
 
C'mon dude, let's not act childish; you're dealing with a grown man. So you are telling me that ever since AMD gave x86 architecture to the consoles (PS4 and Xbox One), you haven't noticed that nowadays PC gamers are getting all of the 3rd-party and 2nd-party games? I mean, even WWE 2K finally came out on the PC.
 
You are looking at it from the outside. Sorry, had a ninja edit in there. Trust me on this: publishers who are interested in consoles don't give a crap about PC games. It's always the developer that has the interest; they are the ones that push for the PC versions, and usually they have separate teams working on the PC version, spend their own money, and get a better cut of the profits by doing so. The publisher's interest is their pocket, and PC versions of a console game tend to hurt their bottom line. Mind you, not all, but most.
 
There isn't any such thing as publishers who are interested in consoles; publishers are interested in money, bottom line. And with game development being standardized, with all platforms having x86 architecture, publishers are going to give a little more budget money to developers to make games for the PC as well, especially given that PC is mostly digital and doesn't have the overhead fees the consoles do.
 
Maybe I should start making vague claims and unsupported predictions on a forum. I may then be the 'source' of a completely baseless internet article.


Pascal will be 120% faster than an elephant.


-gasp!-
 


You are way off the mark. Consoles, for the most part, always go low-level after a while: it's fixed hardware, so it has hardware limitations, and developers squeeze out as much juice as possible. For example, look at the PS3 with its Cell architecture. Developers weren't accustomed to it, so at first they developed at a high level for 3rd-party games, but as the years went on and they got more familiar with it, they started to program really low-level; hence games like The Last of Us, etc. AMD knew that developers usually go low-level on consoles, so if they standardized the consoles with the PC, developers would go low-level on the PC as well; hence, async compute.

Context switching, which is what Nvidia has, is not tantamount to async compute, which is what Nvidia doesn't have. If Nvidia had async compute support, would they be allowing all of this bad publicity to come out about their cards, especially right before the launch of a new series?

Nvidia lied to their customers (false advertising), and now the chickens are coming home to roost. They are losing performance in DX12, not gaining it, and PC gamers are very curious as to why. If you have an Nvidia card, you pretty much have to upgrade now, given the loss of performance in DX12. And which IHV has Async Compute Engines? Advanced Micro Devices, that's who.
 
It seems no one but you is seeing this correlation between AMD and more games being available on PC... I think it's in your head, personally. Perhaps driven by AMD fanboyism, it seems.

Then there's the line "if you have Nvidia, you pretty much have to upgrade": sensationalism at its finest... or worst, depending on your perspective. Chill out on the AMD reach-arounds; they won't be returning the favor.
 
Even shills like Geoff Keighley said back in 2013 at E3, when talking about how the consoles would have x86 architecture, that the PC would now get all of the 3rd-party games. If a non-gamer/shill like him knows this and you don't, then probably you're just some Nvidia shill, that's all.
 
Like I said, you're the only one here seeing it. It's in your head; you not admitting it to yourself won't change anything. You like AMD, we get it, and there's nothing wrong with that. You don't like nVidia, we get that, and that's fine too. Giving AMD credit for a phenomenon that doesn't exist, on the other hand... there's a word for that: it's called being delusional. And hey, that's OK too, just as long as you're OK with it being brought to your attention.
 
Maybe you're just a paid shill, like the rest of the gaming media. PASCAL will be 10X faster than Maxwell... yeah, I believe that one.
 
You said "maybe" which means you have some doubt. You're seeing things that aren't there. Of that there is no doubt.
 
Pascal was touted as being 10x faster in a specific scenario, not all scenarios. That was a presentation about training neural networks.
 
A lot of back and forth for a thread that started with '[RUMOR]'.

Even if it turns out to be true, I'll go by real-world performance across games I *want to play* before making my next GPU purchase decision.

No amount of async arguing is going to change what we know: we don't know much.
 
Bits and Chips...

Might as well be called The Lisa Su blog.

When async compute is considered a factor in gaming (in about two years) I'll care. Right now people should really be hoping the rumored deal between Intel and AMD goes through; otherwise AMD's debt will be permanent until they go into full liquidation mode.
Nice off-topic post. You must not have anything to say about nVidia not supporting a feature they said they did?
 
I expect Pascal to do async compute effectively. Will it be as good as the GCN arch? Who knows, except Nvidia.

It really depends on the workload and how a game or program is written whether it will take advantage of a GPU's async compute ability. AMD indicated that doing compute operations without async compute gives you stalls, resetting the GPU for a compute operation and then back again for a graphics operation; plus, not all graphics operations will use all available shaders (decreasing efficiency). Async compute helps in getting more efficiency out of the GPU, in short. It is not magical, nor does it automatically give you more performance; it may be more effective just to do all the work with graphics programming rather than mixed graphics and compute operations, even on async-compute GPUs. So, fewer stalls (flushing of buffers, etc.) and more shaders actively being used => maybe better performance ;)
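To illustrate the "fewer stalls" point: with a separate compute queue in D3D12 you only synchronize where the dependency actually is, instead of serializing graphics and compute back-to-back. A hypothetical frame snippet (queue and command-list names are made up, and a real engine would reuse fences rather than create one per call):

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Overlap a compute job with graphics work; wait only where the
// dependency actually exists.
void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* const* computeLists,    // e.g. lighting/culling
                 ID3D12CommandList* const* independentGfx,  // e.g. shadow/z-prepass
                 ID3D12CommandList* const* dependentGfx)    // consumes compute output
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    computeQueue->ExecuteCommandLists(1, computeLists);  // starts immediately
    computeQueue->Signal(fence.Get(), 1);                // mark completion

    gfxQueue->ExecuteCommandLists(1, independentGfx);    // runs concurrently

    gfxQueue->Wait(fence.Get(), 1);                      // GPU-side wait, no CPU stall
    gfxQueue->ExecuteCommandLists(1, dependentGfx);      // safe to read compute output
}
```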

Also, something like DirectCompute in DX12 takes time to develop killer efficient programs for use in games, so this should improve as well; plus, compute libraries becoming available will help (maybe Nvidia will have some GameWorks DirectCompute stuff that works effectively with their hardware in the future).

As for most of this thread, it is mostly nonsense. When the hardware comes out, take it for a spin, and not on some imaginary wishful track.
 
Now, for AMD versus Nvidia GPU hardware, I'd say AMD's GCN arch has been faring better than Nvidia's, including Maxwell (already). Is that the luck of the draw, or better design choices and some luck convincing the industry to go more your way? Plus AMD's ability to overcome GCN's problematic issues (it is more complex) via drivers and programmer experience with it. Now, will AMD finally make some money? That is the real question, because everything else is moot once you go out of business.
 
Pascal was touted as being 10x faster in a specific scenario, not all scenarios. That was a presentation about training neural networks.
Yes, they made it quite clear that it was in regards to deep learning. Unfortunately, we have too many clueless people who like to stir up shit, so it keeps being taken out of context.
 
You are way off the mark. Consoles, for the most part, always go low-level after a while: it's fixed hardware, so it has hardware limitations, and developers squeeze out as much juice as possible. For example, look at the PS3 with its Cell architecture. Developers weren't accustomed to it, so at first they developed at a high level for 3rd-party games, but as the years went on and they got more familiar with it, they started to program really low-level; hence games like The Last of Us, etc. AMD knew that developers usually go low-level on consoles, so if they standardized the consoles with the PC, developers would go low-level on the PC as well; hence, async compute.

Context switching, which is what Nvidia has, is not tantamount to async compute, which is what Nvidia doesn't have. If Nvidia had async compute support, would they be allowing all of this bad publicity to come out about their cards, especially right before the launch of a new series?

Nvidia lied to their customers (false advertising), and now the chickens are coming home to roost. They are losing performance in DX12, not gaining it, and PC gamers are very curious as to why. If you have an Nvidia card, you pretty much have to upgrade now, given the loss of performance in DX12. And which IHV has Async Compute Engines? Advanced Micro Devices, that's who.


Dude, the first question a publisher asks after seeing a demo or video of your game, if they like it, is: what are your target platforms? Most publishers won't even look at a dev that is making a PC-only game. If the dev goes to round 2 of the selection process, the publisher will push them to make a console version, but only if they have the experience and knowledge. Then, once negotiations are done and the dev signs their life away, the publisher will get your team together and start splitting it up based on experience, bringing the right people in to ensure you have a "console experienced" team.

LOL, do you even know what context switching is?
 