AMD Polaris Announcement

In fairness, it is both. AMD launched the 4770 on the 40nm process while the rest of the 4000 series was on 55nm, to get a jump start on their next GPU revision.

Well, you can do the same thing with the large chip too; you just get lower yields to begin with and fix the errors as you would with the smaller chips.
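As a rough illustration of why that hurts so much more on a big die, here is a back-of-envelope Poisson yield model; the defect density is an assumed number purely for illustration, not a real foundry figure:

    import math

    # Simple Poisson yield model: yield = exp(-die_area * defect_density)
    D0 = 0.4  # assumed defects per cm^2 on an immature process (illustrative only)
    for area_cm2 in (1.0, 2.0, 4.0, 6.0):  # ~100 mm^2 mid-range die up to ~600 mm^2 big die
        print(f"{area_cm2 * 100:.0f} mm^2 die -> ~{math.exp(-area_cm2 * D0):.0%} yield")

With those made-up numbers, a ~100 mm^2 die yields roughly two thirds good candidates while a ~600 mm^2 die yields under 10%, which is why the big chips usually wait for the process to mature.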

I don't think you will see that this generation; if we do, then we are looking at a Q4 or later launch for the big chips.
 
Here's the thing people haven't even mentioned: if that unnamed Polaris card is released in 1-2 months, has the same performance as the GTX 950 while using 54 W less, and they release it first... the reason why? OEM contracts.

Can you imagine how many cheap Dell/Apple/Acer/Lenovo OEM PCs they could build using a cheap 300 W PSU with a card that hardly uses any power?

It's all about OEMs if you ask me. Impressive. Now it's time to see what Nvidia has. Remember, Nvidia also has 14/16nm coming soon.
 
Hmm, Q2 is 4 months away, and we don't know if the performance really matches the GTX 950, nor this Polaris card's real power consumption, because of the frame rate cap and the choice of game.

But I do think they will release both small and large chips at the same time, just to different customers. As you said, one for OEMs (the smaller chips) and one for general consumers (the larger chips).
 
AMD probably has a much better shot at getting some mobile design wins this year if they release mobile Polaris GPUs. One of the many reasons you essentially only see low- to mid-end Nvidia parts in notebooks with discrete graphics is that Maxwell parts use so much less power, because Nvidia stripped out the hardware scheduler AMD left in for superior context switching (which could not be taken advantage of in DX11).

They will still have that hardware now, but the new chips have been totally retooled for lower power, so unless Nvidia pulls out something crazy on the efficiency side to best AMD, I expect there won't be the sort of clean sweep of Nvidia mobile GPUs this year.
 
Yeah rumors seem to point at mobile Polaris GPUs in 2-3 months. AMD has working silicon, which means select OEMs probably also have working silicon for AMD design wins. First and foremost I would think Apple must be very interested in lower TDP GPUs, followed by the other usual manufacturers.
 
Nope. How do you factor in the hardware scheduler for this? Do you know how much die area it takes up? I don't know about you, but I don't have those figures, and even if I did, I wouldn't expect it to be that much. This goes back to knowing what you are talking about, and if you don't, looking into it.

You are expecting too much from this, and again, see above. Both companies have the same restrictions with manufacturing nodes, so don't expect either to get any benefit over the other there. From a design perspective, how long has nV been in the lead on performance per watt? Three generations. And we still have to consider that nV is using custom libraries while AMD is not. So unless nV screws up something they have been doing quite well as of late, I don't see how AMD can outdo them when efficiency per watt is considered. nV has been saying they are going for 2.5 to 4 times the performance in the same power envelope for Pascal, for different needs of course (compute and graphics). So let's take 2.5x for graphics: that will give them the same advantage they have now against Polaris, actually more, so even if they don't hit that mark and only hit 2x, they still have a cushion.
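Just to put rough numbers on that cushion argument, here is a quick sketch; every input below is an assumption for illustration, not a measured or announced figure:

    # All inputs are assumptions for illustration, not measured or announced numbers.
    maxwell_lead = 1.3   # assumed current Maxwell perf/W advantage over 28nm GCN
    pascal_gain  = 2.0   # the lower "only hit 2x" case mentioned above
    polaris_gain = 2.2   # assumed Polaris perf/W gain from the new node

    nv_next  = maxwell_lead * pascal_gain  # ~2.6x relative to today's 28nm GCN
    amd_next = polaris_gain                # ~2.2x relative to today's 28nm GCN
    print(f"remaining perf/W gap: ~{nv_next / amd_next:.1f}x")

Even with the conservative 2x figure for Pascal, an assumed starting lead keeps nV ahead in this toy calculation; change the assumptions and the gap obviously moves.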
 
Razor1 -- I think you might be speculating too much. Just a hunch, but time will tell as architectural details flow out (agreed on the whole yields thing, though). If it's still very much GCN, I'm wondering if most/all of the power improvement is purely transistor scaling.
 
Well, that's why I said I'm probably reading into it too much :) and it goes both ways. But if we just look at the information AMD has stated and shown so far, we can't get any real info on power consumption and performance for this chip, outside of a 30% performance increase at the same power consumption, or 50-60% less power (I made a mistake before; I thought they stated 70%) at the same performance, from the new node. Which is quite underwhelming, but with that statement you have to look into what the ideal power consumption is for the die size and for the new architecture. There are too many ifs and buts about what they showed to presume anything. There are ideal die sizes too, because the GTX 950 isn't the most power-efficient chip out of the 9x0 family; the GTX 970 and GTX 980 are, and actually the GTX 750 Ti is better too.
 
I was agreeing with you, haha. :D It's really just too early and too loose a test to garner much more than that it's not a space heater compared to what we presently have. It may well be a space heater relative to other 14/16nm offerings.
 
I don't think we will get space heaters, hopefully, lol. The main advantage of a FinFET design is control of leakage; the trade-off is the size of the transistor, where a FinFET is a bit larger than a traditional planar transistor.
 
Actually that's not true.

Right from TSMC:
http://www.tsmc.com/english/dedicatedFoundry/technology/20nm.htm
See here: TSMC's 20nm process technology can provide 30 percent higher speed, 1.9 times the density, or 25 percent less power than its 28nm technology.

http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm
TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.

So a 16FF+ design is actually a few percent smaller than one on 20nm. This goes back to knowing what you are talking about, and if you don't, looking into it.
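Running the arithmetic on the two TSMC figures quoted above (both density numbers are relative to 28nm):

    density_20nm  = 1.9  # TSMC: 20nm is "1.9 times the density" of 28nm
    density_16ffp = 2.0  # TSMC: 16FF+ is "around 2 times the density" of 28nm

    print(density_16ffp / density_20nm)  # ~1.05 -> a 16FF+ design comes out ~5% smaller than on 20nm
    print(1 / density_16ffp)             # 0.5   -> roughly half the area of the same design on 28nm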


Also, the reason nVidia had the hot shader clock on Fermi and Tesla (can't remember if Tesla was the first or if some of the earlier designs had it too) was a die size/power trade-off. You have half as many shaders running twice as fast, although this costs power because you have to raise the voltage to raise the clock frequency, and that doesn't scale linearly, power-wise, as we all know.
With Kepler, nVidia was able to make the opposite trade-off: they could put in twice as many shaders running ~slower and use more die space, but the 28nm process let them do that and stay within their target die size, thus making the chip more efficient.
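The usual first-order dynamic power relation (P ~ C * V^2 * f) makes that trade-off concrete; the voltage bump below is a made-up number just to show the shape of it:

    # Dynamic switching power: P ~ C * V^2 * f (first-order CMOS approximation)
    C = 1.0                   # normalized switched capacitance of the full-width shader array
    f = 1.0                   # normalized base clock
    V_low, V_high = 1.0, 1.2  # assumed voltage at base clock vs. at the 2x hot clock (illustrative)

    wide_slow   = C * V_low**2 * f               # full shader count at the base clock
    narrow_fast = (C / 2) * V_high**2 * (2 * f)  # half the shaders at twice the clock
    print(narrow_fast / wide_slow)               # ~1.44 -> the hot-clocked layout burns ~44% more power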

:)


EDIT: As far as "pipe cleaning" goes -- it is a valid concept. As brought up before, AMD doing the 4770 on 40nm where the rest of the 4000 series was 55nm allowed them to "test" the process on a chip that was less important, and wasn't going to be pushed as hard for performance. They learned some valuable lessons doing this, I recall one of them being that they had to double up vias in the 40nm designs. This allowed AMD to release the 5850 and 5870 with good yields which allowed them to become likely some of the best price/perf cards of all time, while nVidia blundered with the Fermi woodscrew debacle, going all-in and making a big-chip 'first' on a brand new process. IIRC they only got 7 working die out of the first production run of the GF100. nVidia learned a lesson there too, and released GK104 on 28nm before later releasing bigger chips on that process. Just some fun history to remember there, heh.
 
Extide, I think we're almost all in agreement here, by different flavors of description. (Good stuff to add, no less!)
 
Actually that's not true.

Right from TSMC:
http://www.tsmc.com/english/dedicatedFoundry/technology/20nm.htm
See here: TSMC's 20nm process technology can provide 30 percent higher speed, 1.9 times the density, or 25 percent less power than its 28nm technology.

http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm
TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.

So a 16FF+ design is actually a few percent smaller than one on 20nm. This goes back to knowing what you are talking about, and if you don't, looking into it.

Hmm, OK, marketing material that never seemed to materialize in a real-world solution for larger chips shows the marketing material was just that. This is why Intel went with FinFET at 22nm: they saw that without FinFET the 22nm node wasn't a viable option for their CPUs. These people aren't dumb enough to waste millions, even billions, of dollars trying to get something to work when it definitely won't work.

And when did I say FinFET transistors get 2 times the density? I stated that 4 times the density when jumping two nodes is what didn't happen, due to the size of the FinFET transistor and the metal layer sizes. Hmm, guess you didn't bother reading my post thoroughly enough?

Also, the reason nVidia had the hot shader clock on Fermi and Tesla (can't remember if Tesla was the first or if some of the earlier designs had it too) was a die size/power trade-off. You have half as many shaders running twice as fast, although this costs power because you have to raise the voltage to raise the clock frequency, and that doesn't scale linearly, power-wise, as we all know.
With Kepler, nVidia was able to make the opposite trade-off: they could put in twice as many shaders running ~slower and use more die space, but the 28nm process let them do that and stay within their target die size, thus making the chip more efficient.

:)
The hot shader clock wasn't Fermi; it was dropped for Fermi, unless you were reading Charlie's rumor-mill blog (yeah, the wood screw thing was a fuck up), which was so off base it wasn't funny. The G80 and its derivatives had hot clocks, and only that generation. The hot clock thing wasn't about die space savings either. Actually, if you look at the architectures, Kepler, Fermi, and G80 were all at the reticle limit of the process they were on, all of them around 600mm2 if I'm not mistaken, so the ones with hot clocks vs the ones without didn't shift the size of the GPU (whether they used more transistors or fewer is hard to say). The G80 and the G200s may have had to drop transistor density because of the hot clocks, which we don't really know, but I can see why they would have, since the higher clocks lead to more leakage.

I even called him out on all the crap he was spewing at the time over at B3D, to the point he couldn't really post there anymore and told me to post on his forum, and I stated it was a waste of my time to do that.

Edit
Sorry, it was Kepler where they dropped the hot clocks; I forgot, it's been a while, and actually that's when I stepped away from following graphics cards, 7 years ago ;) So I take that back.

BTW, Tesla is a product line and Fermi is an architecture, two different things, don't want to confuse the two ;) they both use the same Fermi architecture; well, gen to gen, Tesla uses that gen's architecture.
EDIT: As far as "pipe cleaning" goes -- it is a valid concept. As brought up before, AMD doing the 4770 on 40nm where the rest of the 4000 series was 55nm allowed them to "test" the process on a chip that was less important, and wasn't going to be pushed as hard for performance. They learned some valuable lessons doing this, I recall one of them being that they had to double up vias in the 40nm designs. This allowed AMD to release the 5850 and 5870 with good yields which allowed them to become likely some of the best price/perf cards of all time, while nVidia blundered with the Fermi woodscrew debacle, going all-in and making a big-chip 'first' on a brand new process. IIRC they only got 7 working die out of the first production run of the GF100. nVidia learned a lesson there too, and released GK104 on 28nm before later releasing bigger chips on that process. Just some fun history to remember there, heh.
Oh don't tell me about crap from Charlie, that is him talking right there. Again those things are so off base it wasn't funny.



It's not pipe cleaning; process maturity doesn't mean you can make any chip on the process just to get the process mature, it changes on a per-chip basis. Some of the knowledge that you learn from one can be transferred to another, though. 20nm has been used for many mobile chips and that still didn't make it viable for larger performance chips like GPUs; why is that? If it were as simple as getting it working on smaller chips and just waiting, it would have been used for GPUs, correct? According to what you're saying, it's as simple as that?

Extide, I think we're almost all in agreement here, by different flavors of description. (Good stuff to add, no less!)


Oh, I have no agreement with what he stated from an engineering point of view. Taking Charlie's blog as a pot of gold? Hell no. All of the stuff Charlie pulls out of his ass is "we stated this months ago...", and even those are off the mark, just vagaries that can be spun any which way.
 
Hmm, OK, marketing material that never seemed to materialize in a real-world solution for larger chips shows the marketing material was just that. This is why Intel went with FinFET at 22nm: they saw that without FinFET the 22nm node wasn't a viable option for their CPUs. These people aren't dumb enough to waste millions, even billions, of dollars trying to get something to work when it definitely won't work.

Yeah, we all know there were no 20nm 'big/fast' chips, but that wasn't the point.

And when did I say FinFET transistors get 2 times the density? I stated that 4 times the density when jumping two nodes is what didn't happen, due to the size of the FinFET transistor and the metal layer sizes. Hmm, guess you didn't bother reading my post thoroughly enough?

You said FinFET made the transistors bigger, and that since 16FF+ is based on the 20nm BEOL, a 16FF+ design was bigger than one on 20nm. The 2x density quote was from TSMC, comparing 16FF+ to 28nm, vs 1.9x density comparing 20nm to 28nm. So yeah, 16FF+ isn't a real 'node shrink', but it is a BIT smaller.

... jumping from 28nm: since the metal layers are similar to 20nm lithography, you still end up with transistor counts similar to 20nm without FinFET, and since transistor density is less with FinFET, this too adds to the size of the silicon.


I was just saying, that is not the case.
 
Edit
Sorry, it was Kepler where they dropped the hot clocks; I forgot, it's been a while, and actually that's when I stepped away from following graphics cards, 7 years ago ;) So I take that back.

BTW, Tesla is a product line and Fermi is an architecture, two different things, don't want to confuse the two ;) they both use the same Fermi architecture; well, gen to gen, Tesla uses that gen's architecture.
Oh don't tell me about crap from Charlie, that is him talking right there. Again those things are so off base it wasn't funny.
Tesla was an arch as well, GTxx chips.

It's not pipe cleaning; process maturity doesn't mean you can make any chip on the process just to get the process mature, it changes on a per-chip basis. 20nm has been used for many mobile chips and that still didn't make it viable for larger performance chips like GPUs; why is that? If it were as simple as getting it working on smaller chips and just waiting, it would have been used for GPUs, correct? According to what you're saying, it's as simple as that?

Nobody ever said that making a small chip automatically lets you make a bigger chip. With 20nm the issue was leakage; that is something that cannot be fixed with a respin or a different design, yes. However, there were some "gotchas" with the 40nm process that, if you didn't know about them, would make your yield shit, but if you did know about them you could get around them with a different design. Totally different scenarios.


Oh, I have no agreement with what he stated from an engineering point of view. Taking Charlie's blog as a pot of gold? Hell no. All of the stuff Charlie pulls out of his ass is "we stated this months ago...", and even those are off the mark, just vagaries that can be spun any which way.

Yeah Charlie spews a lot of BS and seems to have a thing against nVidia, but the fact is they messed up with fermi, and if they had more experience with the 40nm node BEFORE making GF100, it would have not had the issues it did. It's just easier and less risky to gain that experience with a smaller chip first, that is all.

Extide, I think we're almost all in agreement here, by different flavors of description. (Good stuff to add, no less!)

Yeah, I just wanted to point out some things, that's all. :)


EDIT: I'm not intending to come off as argumentative here, so sorry if that's how it seems. :)
 
Yeah, we all know there were no 20nm 'big/fast' chips, but that wasn't the point.

OK, but that was what I was kinda getting at.

You said FinFET made the transistors bigger, and that since 16FF+ is based on the 20nm BEOL, a 16FF+ design was bigger than one on 20nm. The 2x density quote was from TSMC, comparing 16FF+ to 28nm, vs 1.9x density comparing 20nm to 28nm. So yeah, 16FF+ isn't a real 'node shrink', but it is a BIT smaller.
Ah OK, I didn't mean that; sorry if you read it that way. Yes, now I agree with what you are saying lol. Yeah, it's not a full node jump, it's a half node.



I was just saying, that is not the case.
It's less dense than the same node without FinFET is what I'm saying, not compared to previous nodes.

I'm expecting a 2-fold jump in transistors for this node over the previous 28nm chips, with the chips remaining the same size. So if that's the case, which seems likely from what the fabs are saying, it's not the same thing as a 2-node jump; it's more like a 1-node jump.

I'm sure 14nm has a bit more play here, but it's nothing earth-shattering.
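A quick sanity check on the one-node-vs-two-nodes framing, using the usual ~0.7x linear shrink per full node and the TSMC density figure quoted earlier:

    per_node_density = 1 / 0.7**2           # ~2.04x density per full node (0.7x linear shrink)
    two_full_nodes   = per_node_density**2  # ~4.2x -> what a true double jump from 28nm would give
    quoted_16ffp     = 2.0                  # TSMC's "around 2 times the density" of 28nm figure

    print(two_full_nodes, quoted_16ffp)     # ~4.2 vs 2.0 -> in density terms it behaves like a single node jump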
 
I didn't take it as argumentative, but then again, I work with strongly opinionated engineers (pot, meet kettle, haha), so I might have a leg up on reading that tone. :)

All good--seems you guys hashed out the vernacular. :D
 
Tesla was an arch as well, GTxx chips.

God, it's been that long... yeah, the G80 was Tesla.

Nobody ever said that making a small chip automatically lets you make a bigger chip. With 20nm the issue was leakage; that is something that cannot be fixed with a respin or a different design, yes. However, there were some "gotchas" with the 40nm process that, if you didn't know about them, would make your yield shit, but if you did know about them you could get around them with a different design. Totally different scenarios.

It kinda sounds like using words like "pipe cleaning" makes it out to be a simple thing to do, that once you get chips working on the process it should be simple, which isn't the case. That's why I went into depth to explain some of the general concepts of why it's done a certain way.



Yeah Charlie spews a lot of BS and seems to have a thing against nVidia, but the fact is they messed up with fermi, and if they had more experience with the 40nm node BEFORE making GF100, it would have not had the issues it did. It's just easier and less risky to gain that experience with a smaller chip first, that is all.

Actually, they didn't mess up with the process. Mac, another poster here, made a good point in another thread: nV had issues with Fermi's memory bus, which the GTX 580 fixed.


EDIT: I'm not intending to come off as argumentative here, so sorry if that's how it seems. :)

Arguments are ok, as long as we can see where they are coming from and get to a point of agreement :)
 
Ah OK, I didn't mean that; sorry if you read it that way. Yes, now I agree with what you are saying lol. Yeah, it's not a full node jump, it's a half node.

It's less dense than the same node without FinFET is what I'm saying, not compared to previous nodes.

I'm expecting a 2-fold jump in transistors for this node over the previous 28nm chips, with the chips remaining the same size. So if that's the case, which seems likely from what the fabs are saying, it's not the same thing as a 2-node jump; it's more like a 1-node jump.

I'm sure 14nm has a bit more play here, but it's nothing earth-shattering.

Ah, looks like we are on the same line of thinking now :) And yeah, I am not sure why some articles are saying it's a full 2-node jump. I mean, yeah, you get a big benefit from the FinFETs themselves, but I would say it's closer to 1.5 -- and we need it after 5 years on 28nm.


God, it's been that long... yeah, the G80 was Tesla.
It kinda sounds like using words like "pipe cleaning" makes it out to be a simple thing to do, that once you get chips working on the process it should be simple, which isn't the case. That's why I went into depth to explain some of the general concepts of why it's done a certain way.

Yeah I guess the phrase, "Gaining experience with the process" would be better than pipe-cleaning.

Actually, they didn't mess up with the process. Mac, another poster here, made a good point in another thread: nV had issues with Fermi's memory bus, which the GTX 580 fixed.

That's interesting -- I have never heard of that before. Although it is telling that they never released a version of GF100 with all shaders enabled.


Arguments are ok, as long as we can see where they are coming from and get to a point of agreement :)

Yeah, I just was hoping it didn't seem like I was arguing just to argue, as some people do sometimes...
 
The purpose of the demo was twofold. First, it proved that AMD already has functioning 14nm GPUs. Secondly, it showcased the power efficiency of the new 14nm process and AMD's new Polaris video cards.

Given that power efficiency has suddenly become the most important metric for purchasing a video card, I expect a lot of current Nvidia owners to jump ship to AMD once Polaris cards hit the store shelves. :D

Right, they got sucker-punched by the GTX 750 Ti (aka GTX 860M/GTX 960M) two years ago: slightly better performance than AMD's high-end m280X mobile part at half the power.

This year they're not going to let that happen. We have yet to see what Nvidia will ship, but by releasing a part with the same performance and half the power of a GTX 960, they are telling OEMs that they're taking notebooks (and SFF systems) seriously for this generation.
 
Right, they got sucker-punched by the GTX 750 Ti (aka GTX 860M/GTX 960M) two years ago: slightly better performance than AMD's top-end mobile part (based on an under clocked 7870) at half the power.

This year they're not going to let that happen. We have yet to see what Nvidia will ship, but by releasing a part with the same performance and half the power of a GTX 960, they are telling OEMs that they're taking notebooks (and SFF systems) seriously for this generation.


If AMD is going to be launching mobile in Q2 of this year, I don't think nV will have an answer for them......
 
Another thing that I did not expect was working silicon. That tells me the yields must be pretty good if they are launching in 1-2 months.

I am shocked I haven't seen anything from Nvidia. Of course, they could have shown people behind closed doors.

I don't know about anyone else, but I am more impressed with the HDR monitors!
 
nV has had pre-production silicon for Pascal for close to 6 months or more now; of course they haven't shown anything off to the general public, but people in the know have said as much.

To get something out in Q2 of this year, you really should expect these IHVs to have functional chips close to a year prior to launch.
 
Exactly. Nvidia said they have working silicon. All the PR shit people bitched about AMD doing (which was totally OK to bitch about, since they did paper launch the Nano... and let's not bring up the other horrible PR stuff they did).

I mean, you would "think" Nvidia would have a working demo of Pascal at CES if they did have working silicon 6 months ago.

AMD just taped out what, 2 months ago, and had a working demo at CES.

This is why I hate PR and media announcements.
 
True, but nV's leak came well before their marketing even announced it, from a member at B3D who, by the by, has a great track record. I have had a pretty good idea, for the most part, of when nV has had functional pre-production silicon since the G80. The G80 was ready around the time the G70 was released, and that wasn't pre-production, it was production-ready silicon; after the launch of the G80, the date on the chip confirmed that. So that was actually a year and a half before its release. I have no doubt that nV had it for a good while before they stated as much. And that's their large chip.

What was taped out for AMD, though? First spin, second spin? We don't know which one that was; I wouldn't be surprised if it was second- or third-spin silicon. I would expect them to have it at least 6 months prior to mass production, and "at least" is cutting it close too. This is why I have a tough time even clarifying things that Charlie writes. Faud finally got the hint and shut his trap lol; Charlie has to learn the same. As of late it looks like he has, though; gotta wait for silly season to come around again.

When one of these companies delays a product relative to when the other launches a new product, by more than one quarter, it's not an issue with the manufacturing process; it's that they can't compete, so they are trying everything they can to get to where they need to be to compete.
 
Yeah, once AMD and Nvidia release new cards, there will be many threads/bans/trolls/bitching.

I plan to stay away from the forums this time!
 
Lol.... they compared next gen hardware to a $150 old tech card? I wonder which was faster.

Good thing you aren't any taller or the point might have hit you right in the face instead of whizzing by over your head. :rolleyes:
 