Vega Rumors

When I was a young man (not the not-so-old man I am now), the name Vega stood for a POS vehicle that over-promised on performance and reliability on paper and underperformed big time in real life... I'd never buy anything called a Vega = POS. With age comes wisdom: I've owned just about every generation of video card and its leading entry, starting with both the 3DFX Voodoo and the Rendition Vérité V1000... I'll stick with my 1080 Ti, it's solid. Vega will probably start burning oil, getting really loud, and blowing black smoke and flames out the exhaust at less than the speed limit... OK, I agree, the flames out the exhaust were cool.
 
Kryographics cooler for AMD VEGA
http://aquacomputer.com/newsreader/items/kryographics-kuehler-fuer-amd-vega.html

 

When I was young, Vega stood for a GPU that could do double duty as a portable space heater.
 
This is nonsense. If they do the same thing differently, they are still different. Gasoline and electric cars both take you from point A to point B, but no one would argue they are the same thing.
They don't appear like they'd be all that different. Volta moved the "performance critical" part to hardware for the same reason GCN has ACEs there in the first place: avoiding that round trip to the CPU, and handling high-level synchronization that was problematic even on Pascal because of the latency. The actual computation was never difficult; it's just that a CPU couldn't respond fast enough. GPU-driven rendering would also benefit from not relying on the CPU for scheduling.

It's a hit against Pascal, but at least Volta should now share a similar model for programmers.

Dude, it's coming from a guy who thinks something done in software (MPS), which was meant to help multiple applications share the resources of the entire GPU and was/is in Kepler, is the same thing as instruction-level dispatching, LOL. It's insulting our intelligence to respond to him.
Just like ACEs allow multiple processes, or independent queues, to share GPU resources. Such an easy connection to make, but whoosh!
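The idea being argued here (several independent queues feeding one shared pool of execution resources, with no per-item CPU round trip) can be sketched as a toy Python model. Everything below is made up for illustration; it is an analogy, not real driver or hardware behavior:

```python
import queue
import threading

# Toy model: independent command queues feed a shared pool of "compute
# units", loosely analogous to how ACEs (or CUDA streams) let several
# submitters share one GPU. All names are illustrative, not real APIs.

NUM_COMPUTE_UNITS = 4

def compute_unit(work_pool, results, lock):
    """Pull work items until the pool is drained."""
    while True:
        try:
            src_queue, item = work_pool.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append((src_queue, item * item))  # stand-in for a kernel

# Three independent "queues" submit work without coordinating with each other.
work_pool = queue.Queue()
for qid in range(3):
    for item in range(4):
        work_pool.put((qid, item))

results, lock = [], threading.Lock()
units = [threading.Thread(target=compute_unit, args=(work_pool, results, lock))
         for _ in range(NUM_COMPUTE_UNITS)]
for u in units:
    u.start()
for u in units:
    u.join()

# Every queue's work completed, with execution interleaved across queues.
assert len(results) == 12
```

The point of the sketch is only that no central coordinator has to babysit each item; the units pull work from whichever queue has it ready.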

What was that one, tensor core capabilities from GCN's current ALUs plus a swizzle? LOL, that took the cake, ate the cake, and shat it out all at the same time. Why the hell did Google make tensor cores to begin with if all GCN needed was a swizzle? LOL.
Tensor cores, so complicated even Google and seemingly every company looking at deep learning made one. Not like GPUs weren't designed for matrix math after all. Throwback to SIMD designs of the 80s.
 
So Raja Koduri just got back from vacation in India. He made a few posts on Twitter about the pricing of the cards.

Suddenly Gibbo's outfit has Vega 64 cards in stock for €479, a 23% price reduction overnight after Raja showed up.
https://www.overclockers.co.uk/powe...ress-graphics-card-stand-alone-gx-18v-pc.html

Interesting developments. Even more interesting are Raja's comments on Twitter. Warning: it is a lot of tweets, so be prepared to... read.

 


Raja needs to go on a permanent vacation
 
AMD loses at least $100 on every Vega 64 card it sells at its $499 Suggested Etail Price (SEP).

http://fudzilla.com/news/graphics/44401-amd-is-losing-100-on-every-vega

Well, it is something people have been frowning upon:
The pricing of the HBM 2.0 memory, the packaging and substrate cost are simply too high to have a sustainable price of $499. We have mentioned this before, but Vega for AMD is not about making money. Don’t get me wrong, every company would like to make money with every product that it makes, but for AMD it is more important to win market share. First you win the market share, then you go after better ASPs (Average Selling Prices) and potentially start running a positive business.

https://semiaccurate.com/forums/sho...1-What-we-know&p=291856&viewfull=1#post291856

The funny thing in this article is the statement that AMD is going for market share.
The card is priced at stupid levels for the end user right now anyway, so there is no price advantage for AMD at all. How can they gain any market share in the gaming sector when they can't sell any to the gaming market?

And you cannot ignore the fact that market share cannot be gained at the current prices Vega is selling for. So how are you going to gain market share when your card is hundreds of dollars more than the direct competition?
Let me know if you figure that one out, please.
 
But according to Archaic4000, Volta is going to use GCN because it scales to the moon and back, plus it is an ACE pilot or something like that.
Clearly you guys don't understand this stuff at all. Living in the land of magic and fantasy. It's not at all difficult either.

You really don't think it's odd that Pascal was so good at DX12 that Volta went in a completely different direction? Adopting similar hardware features to those GCN has had for years, even though GCN is the foundation of the upcoming shader model? At least have a little common sense. The only advantage Paxwell had was the register file cache and tiled raster. Both are compatible with GCN.
 
LOL, I don't know where he gets his ideas; figments of imagination!
Well if you read any whitepapers from AMD or Nvidia you'd know. Entry level programming courses or game design would explain it for you as well. That's the very reason you get laughed out of any technical forum where people actually know what they're talking about.
 
Links? I love me a good nerd fight.
 
I'm amazed this guy has been allowed to shill for this long. I mean, it's not even subtle like complaining about anecdotal driver issues or posting a rumour. It's just full on lies and delusion.

On an unrelated note, I heard the next Nvidia card is going to be 20 times faster than a 1080TI and will cost just $3.50.
 
Clearly you guys don't understand this stuff at all. Living in the land of magic and fantasy. It's not at all difficult either.

You really don't think it's odd that Pascal was so good at DX12 that Volta went in a completely different direction? Adopting similar hardware features to those GCN has had for years, even though GCN is the foundation of the upcoming shader model? At least have a little common sense. The only advantage Paxwell had was the register file cache and tiled raster. Both are compatible with GCN.
You don't actually think NVIDIA started working on Volta after they saw Pascal's DX12 performance, do you? The register file cache and tiled raster are the only advantages, well, besides high clocks and the generally industry-defining performance that AMD has yet to actually match.
 
Well if you read any whitepapers from AMD or Nvidia you'd know. Entry level programming courses or game design would explain it for you as well. That's the very reason you get laughed out of any technical forum where people actually know what they're talking about.

You mean those same forums where you went on a page-long tirade about tensor cores, kept droning on about tensor products and 3D arrays, and needed someone to point out that you didn't even understand what operations the units were performing?

Come on. Don't be silly, Anarchist.
 
You don't actually think NVIDIA started working on Volta after they saw Pascal's DX12 performance, do you? The register file cache and tiled raster are the only advantages, well, besides high clocks and the generally industry-defining performance that AMD has yet to actually match.

Yes, Volta is going to copy GCN; GCN is just too glorious for NV to pass on the opportunity.
 
He still doesn't know the difference between SIMT and SIMD, even though I stated SIMT is SIMD with thread scheduling. Dude is literally as stubborn as a donkey.

He doesn't know that, but wants to talk about MPS, swizzles, ACEs, async compute, and whatnot. Crazy, man.

This is why all the ACE BS AMD spewed out doesn't make any difference. Then the crazy people started picking up "no hardware for scheduling instructions"; crazy shit like that only comes from crazy people who don't know shit about GPUs, lol. Who the fuck cares if it's an ACE or a GigaThread Engine? They both do the same shit in the end, but one does more: it handles both threads and instructions.

Mind-blowing how AMD Marketing has drone slaves to do their work. I don't even know why they do it; they could get a few thousand computers and just use their Instinct cards as AI, and those would be smarter and more capable... oh, I forgot, they have no software to do that.
 
Links? I love me a good nerd fight.
I've been traveling the past few weeks, so it's a bit inconvenient to find and post from a phone, but GCN whitepapers, GPUOpen, and Nvidia's papers are a good start.

You don't actually think NVIDIA started working on Volta after they saw Pascal's DX12 performance, do you? The register file cache and tiled raster are the only advantages, well, besides high clocks and the generally industry-defining performance that AMD has yet to actually match.
I think they knew towards the end of Pascal's development. The RFC is a large part of what allows the higher clocks: keeping registers closer to the execution units to facilitate the clocks without a crippling power draw, avoiding some of the energy burned driving the required register accesses. Vega isn't that far off on actual clocks, and as I mentioned before, the RFC concept required compiler work and may be the culprit here. Fix that, add in some tiled raster or hopefully TBDR, and everything is good. There's a reason Raja keeps saying there is software work to do.
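The RFC idea can be illustrated with a toy model (the cache size and the instruction trace below are made up; this is just the concept, not Nvidia's actual implementation): a small buffer of recently produced registers near the ALUs absorbs most operand reads, so the big, power-hungry main register file is touched far less often.

```python
from collections import deque

# Toy register file cache (RFC): recently *written* registers are kept
# in a tiny buffer near the ALUs; source reads that hit it skip the
# expensive main register file. Trace entries are (dest, [sources]).
RFC_SIZE = 6
trace = [("r3", ["r1", "r2"]),
         ("r4", ["r3", "r1"]),
         ("r5", ["r3", "r4"]),
         ("r6", ["r5", "r2"])]

rfc = deque(maxlen=RFC_SIZE)  # names of recently written registers
main_rf_reads = 0
for dest, sources in trace:
    for src in sources:
        if src not in rfc:
            main_rf_reads += 1  # miss: expensive main-RF access
        # hits are served by the small cache near the execution units
    rfc.append(dest)

print(main_rf_reads)  # 4 of the 8 operand reads still hit the main RF
```

In this made-up trace only the initial live-in values (r1, r2) ever touch the main register file; every value produced inside the window is consumed from the cache, which is the energy win the RFC is after.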

You mean those same forums where you went on a page-long tirade about tensor cores, kept droning on about tensor products and 3D arrays, and needed someone to point out that you didn't even understand what operations the units were performing?
I only recall everyone, myself included, repeatedly correcting your assumptions: that the FMA operation wasn't doing what you suggested, but accumulating components of the multiplication operations separately. In the case of large tensors you would defer combining results for efficiency. Then pointing out that a typical SIMD does something similar, requiring only a small tweak to be a miraculous tensor core. It's not my fault you didn't understand it.
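For reference, the operation being argued about can be written out in a few lines of plain Python. The sizes and names here are illustrative (a real tensor core does a small FP16 matrix multiply with FP32 accumulation in hardware); the point is that D = A·B + C is a matrix multiply-accumulate, which is not the same thing as a tensor (outer) product:

```python
# Pure-Python sketch of a matrix multiply-accumulate on 4x4 matrices,
# the D = A @ B + C shape of operation, contrasted with an outer product.
N = 4

def matmul_accumulate(A, B, C):
    """D[i][j] = sum_k A[i][k] * B[k][j] + C[i][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(N)) + C[i][j]
             for j in range(N)] for i in range(N)]

def outer_product(u, v):
    """A rank-1 tensor (outer) product: a different operation entirely."""
    return [[ui * vj for vj in v] for ui in u]

I = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
B = [[float(i * N + j) for j in range(N)] for i in range(N)]
C = [[1.0] * N for _ in range(N)]

D = matmul_accumulate(I, B, C)  # identity times B plus C: every entry is B + 1
assert D == [[b + 1.0 for b in row] for row in B]
```

Note the accumulate term C: chains of these small multiplies can build a large matrix product by feeding each step's D back in as the next step's C.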
 
He still doesn't know the difference between SIMT and SIMD, even though I stated SIMT is SIMD with thread scheduling. Dude is literally as stubborn as a donkey.
Well, you did say that "single instruction" wasn't in fact single, so yeah. The very definition of the word proved you incorrect, as you just repeated here. You even linked a blog and paragraphs of your own nonsense to try and back it up while failing to grasp what was actually occurring.
 
It's a hard game to properly test, as overlays don't work. I saw anywhere from 120-160 fps on my GTX 1080 depending on where I was and what I was doing. Either way, the main takeaway is that Nvidia cards do perform vastly better right now than AMD's.

An average of 103 fps by a 480 is bad. lol
 
Well, you did say that "single instruction" wasn't in fact single, so yeah. The very definition of the word proved you incorrect, as you just repeated here. You even linked a blog and paragraphs of your own nonsense to try and back it up while failing to grasp what was actually occurring.


Yeah, I did link it, and I even stated SIMT and SIMD have the same base, lol, even in what I wrote before that, but there is a big difference, and if you had any reading comprehension you would be able to understand it. It's like this: you say I can't count to one; well, you can't count at all. At least I'm Indian, and Indians created 0. Where would you be without that?

Didn't read the stuff in the green, did ya? Thought not. Just flapping your keyboard for the hell of it? You can't stop arguing when you are wrong?

So if NV had SIMD prior to GCN, which they did, and they went to SIMT, and are going further down that road with a much more complex SIMT, they are still going towards GCN's SIMD?

WTF is that? Logic fail, comprehension problems, what else? Last subject is math. Can't count either?

Just because Volta has 4 sub-units and GCN has 4 units per CU doesn't mean they are the same, lol. Crazy shit, right? Fiji has 4096 units and Vega has 4096 units, so they must be the same damn chip! That is your logic in action.
 
An average of 103 fps by a 480 is bad. lol
In relation to a GTX 1060, yeah, and that was on low settings at 1080p; go higher and the 480 starts falling behind by a lot. Also, TechPowerUp's testing showed a wider gap. We won't get an accurate picture till after launch, when everyone can better optimize their drivers.

Edit: Gamers Nexus testing (chart: Destiny 2 GPU benchmark, 1080p, highest settings)
 
Yeah, I did link it, and I even stated SIMT and SIMD have the same base, lol, even in what I wrote before that, but there is a big difference, and if you had any reading comprehension you would be able to understand it. It's like this: you say I can't count to one; well, you can't count at all. At least I'm Indian, and Indians created 0. Where would you be without that?

Didn't read the stuff in the green, did ya? Thought not. Just flapping your keyboard, thinking you can read when you can't.

So if NV had SIMD prior to GCN, which they did, and they went to SIMT, and are going further down that road, they are still going towards GCN?

WTF is that? Logic fail, comprehension problems, what else? Last subject is math. Can't count either?
Babylonians created zero
 
So if NV had SIMD prior to GCN, which they did, and they went to SIMT, and are going further down that road, they are still going towards GCN?
At a high level of dispatch, yes. At the SM level the SIMD/T distinction is irrelevant: SIMT or SIMD+permute would be equivalent. The difference would be temporal SIMT, which Nvidia presented some papers on, and which could blur the SIMD/MIMD lines a bit, as instructions would repeat and enough lanes could have different instructions. Vega could be doing the same, but that documentation was blank last I checked. I think Vega and Volta have some hidden behaviors we haven't seen, as they are still being explored. Systolic arrays and such with temporal execution would work rather well with SIMT addressing independently. But again, Vega may have the same capability. Where the addressing resides is the only meaningful difference.
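The SIMD-vs-SIMT distinction being fought over can be sketched in a few lines of plain Python. This is purely illustrative (real warps/wavefronts use hardware execution masks; nothing here is an actual GPU API), but it shows the programming-model difference: who writes the mask.

```python
# Toy contrast of SIMD vs SIMT execution of the same branchy kernel (abs).
data = [-2, -1, 0, 1]

# SIMD view: the program issues explicit vector ops; divergence must be
# expressed as masked/selected vector operations by the compiler or
# programmer.
def simd_abs(vec):
    mask = [x < 0 for x in vec]   # one vector compare
    neg = [-x for x in vec]       # one vector negate
    return [n if m else x for m, n, x in zip(mask, neg, vec)]  # select

# SIMT view: each "thread" runs the same scalar program with its own
# branch; the hardware applies the mask, the programmer never sees it.
def simt_abs(vec):
    def thread_program(x):
        if x < 0:   # per-thread control flow
            x = -x
        return x
    return [thread_program(x) for x in vec]  # lanes of one warp

assert simd_abs(data) == simt_abs(data) == [2, 1, 0, 1]
```

Both routes compute the same lanes, which is why at the execution-unit level the distinction matters less than where the masking/addressing logic lives.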
 
AMD is very explicit in their documentation, unlike nV, so NO THEY ARE NOT DOING IT.

Again, you don't have experience with these companies from a programmer's point of view; I do. 99% of the time, when I can't figure something out or want to know specifically how something works at an architectural level on NV cards so I can optimize better, they just say "we will do the work"; they won't give you information about their GPUs. AMD, on the other hand, spells everything out in whitepapers and programming guides.

You keep doing this: just because another company is doing something and getting results, you think AMD can flip a switch and bam, it's there. No, they can't use a swizzle to get tensor functions in their ALUs. MPS is not ACEs; totally different. GCN is not capable of SIMT; SIMT is something NV created for themselves! GCN was the first SIMD chip AMD ever created and is still using; NV used SIMDs in Fermi and went to SIMT. NV will NEVER go to a GCN-like SIMD; they already used SIMDs and went away from them because they wanted more throughput, and the only way they could get it was more complex thread scheduling, which SIMD could not give and will NEVER give, because it wasn't made for that.

These architectural changes are not simple. They were made as a LOGICAL progression based on previous architectures and what would be better in the future. All of them cost a BUTT LOAD of transistors, so the decision to make those changes is weighed against that and the node it's on.
 
I only recall everyone, myself included, repeatedly correcting your assumptions: that the FMA operation wasn't doing what you suggested, but accumulating components of the multiplication operations separately. In the case of large tensors you would defer combining results for efficiency. Then pointing out that a typical SIMD does something similar, requiring only a small tweak to be a miraculous tensor core. It's not my fault you didn't understand it.

How convenient. This selective memory loss must come in mighty handy when you're a loud-mouthed sciolist of spectacular proportions.

In your haste to dismiss tensor cores as a banal addition to the architecture, you failed to even understand that they perform MATRIX MULTIPLY and NOT tensor products.

Please learn to read

You did not RTFM, name calling is not allowed here and this is the last time we're dealing with you. -Oldie
 
How convenient. This selective memory loss must come in mighty handy when you're a loud-mouthed sciolist of spectacular proportions.

In your haste to dismiss tensor cores as a banal addition to the architecture, you failed to even understand that they perform MATRIX MULTIPLY and NOT tensor products.

Please learn to read


There are NV papers on why they switched over to SIMT from SIMD.

There are papers from Google on why tensor cores can do what they do, in addition to NV's Volta blog posts and slides.

He didn't read/comprehend any of these AT ALL.

For him it's as easy as flipping a light switch and it's done; AMD GPUs can do it all. Because why? They are underfunded and lack the EE experience, but they can do it by the power of their wang; AMD doesn't need brains or money to do anything. They could take their 8500 Pro and make it run DX12 games more efficiently than Volta, even though it can't even run DX9 games.

Damn, if it were as simple as using a swizzle to do everything tensor cores can do, Google could have just gotten the IP from whomever and gotten it done; they wouldn't have needed to spend multiple tens of millions to make tensor cores. It would have been done at a fraction of that cost, even with the IP licensing!

Every damn company that is not AMD (actually, even AMD, by his conjectures) can't think like Anarchist, because they are dumb fucks and Anarchist is superior to all.
 
Maybe you guys should spend a little less time flaming and defaming someone else. So you don't agree with his assertion; great, ignore it and move on. It goes both ways: I see both sides skewing the arguments and being obtuse. Vega itself performs, and judging by the one I own, well enough. But we have to be rational. Would I recommend buying Vega at current pricing over Nvidia alternatives? No, unless a person such as myself was only interested in AMD and desperate for an adequate upgrade. But it in and of itself is in no way a fail. I have zero complaints about its performance thus far. Hell, it is nice to make the top of the charts (top 10%) in GPU benchmarks for once, and this includes the new Ryzen 1800X upgrade as well.
 
I would ask what constitutes a failure in regards to Vega. I think it is a failure as far as market viability goes: it costs too much and came much too late. Calling something a failure is hard, as it is going to be different for everyone, but I think there was a general perception that Vega was going to be AMD's answer to top-of-the-line Pascal, and while it can go toe to toe with a GTX 1080, it makes massive sacrifices in heat and efficiency to do it. At this point, it seems like the only people who are likely to buy it, outside of miners, are AMD diehards who won't buy NVIDIA for whatever reason. As far as defaming anyone goes, there has been a back and forth between downright delusional assertions and others, myself included, probably enjoying the show too much.
 
Maybe you guys should spend a little less time flaming and defaming someone else. So you don't agree with his assertion; great, ignore it and move on. It goes both ways: I see both sides skewing the arguments and being obtuse. Vega itself performs, and judging by the one I own, well enough. But we have to be rational. Would I recommend buying Vega at current pricing over Nvidia alternatives? No, unless a person such as myself was only interested in AMD and desperate for an adequate upgrade. But it in and of itself is in no way a fail. I have zero complaints about its performance thus far. Hell, it is nice to make the top of the charts (top 10%) in GPU benchmarks for once, and this includes the new Ryzen 1800X upgrade as well.


For a person who tells EEs and programmers to use some common sense, he'd better be ready to get what he stated, and a whole lot more, back when he is totally off the mark. It's kind of hard to ignore when he calls for it. I told him 4 months ago that Volta doesn't look like GCN. Anyone who reads that blog knows it doesn't look like Volta is using SIMD, yet here you have him harping on it like AMD is the center of GPGPU development and everyone should follow. Guess what: NV did have a SIMD architecture 3 generations ago!

On top of this, he is making assumptions (not even assumptions, just outright wrong) that a software service available since Kepler is equal to ACEs. And yes, you heard it right here, not from AMD marketing: both NV and AMD have hardware schedulers. NV does have hardware scheduling for instructions; there is no way to do it without it, even in Maxwell, even in Kepler, even in Fermi, even in Tesla. Some architectures just don't dedicate silicon to async compute, only to instruction scheduling.

Then the tensor core thing, lol. His "assertions" are so off the mark and so incorrect they make the word "fail" look great.

You tell me: when was the last time you saw anything remotely interesting in AMD's architecture design that seemed like it would top NV in GPGPU, not just by specs, I mean everything? (The only one that interested me was GCN 1.0, until Kepler came out; it was just so much better at GPGPU it wasn't funny. Originally it looked like just CUDA, but it's not just CUDA, it's the architecture that enables CUDA.)

Now back to Vega

It's pretty much a failure. Yeah, some people are still interested in it, as you stated, if you like AMD (or have a FreeSync monitor), and it's not a bad card, but it's not the best you can get, nor is it really competitive when all metrics are looked at.

The failure part doesn't come from its performance, because it's priced accordingly. The failure part comes from its power usage and being so late. And Vega doesn't look like a forward-looking architecture. Right off the bat, this card is going to be bandwidth limited, quite quickly. We can see that with ETH mining: it doesn't have much extra bandwidth to give, so all that ALU power left over, even if it can be used (which it doesn't look like it can be, because of the scaling issues), will definitely be blocked by its bandwidth in newer games. Secondly, it has the same inherent problems all GCN GPUs have had since the architecture's inception: power hungry, poor scaling, issues with AA performance. Should check out AF performance too, lol; Fiji had issues with that as well.
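The mining point is a decent back-of-envelope check on the bandwidth claim: Ethash reads 64 random 128-byte DAG pages per hash, so hashrate is capped at roughly bandwidth divided by 8192 bytes. A rough sketch (483.8 GB/s is Vega 64's rated HBM2 bandwidth; real cards land well under this ceiling because random accesses don't hit peak bandwidth):

```python
# Back-of-envelope Ethash ceiling from memory bandwidth alone.
BYTES_PER_HASH = 64 * 128        # 64 random 128-byte DAG reads per hash
VEGA64_BANDWIDTH = 483.8e9       # rated HBM2 bandwidth, bytes/s

ceiling_mhs = VEGA64_BANDWIDTH / BYTES_PER_HASH / 1e6
print(f"Ethash ceiling: ~{ceiling_mhs:.0f} MH/s")  # prints ~59 MH/s
```

The ALUs barely matter in this workload, which is why a card with far more compute than its memory can feed shows up as "nothing extra to give" in mining, and why the same wall looms for bandwidth-heavy games.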
 