AMD admits its Radeon RX 5700 series price cuts were a trap for Nvidia

Yes, thank you! Nvidia is an a-hole because they basically raised the cost of GPUs for no reason other than they could at the time. Now they're F'd because their chip is too big and has no headroom, and AMD has a much smaller chip that in some benchmarks matches the performance of their $700 card. So, to recap: AMD's new GPU scales up, Nvidia's doesn't, and Nvidia's cards cost a lot more to produce, so they can't cut prices. So the AMD story is totally plausible.

Literally none of this is correct, and I don't know why you quoted me.

Bad assumption how? Serious question here, as everything points to it being cheaper. Every deep dive I have read, and every insider who mentions cost per die or wafer, says it will be cheaper for AMD; some suggest it's significantly so, while others suggest it's only marginally cheaper. Nowhere have I seen anyone suggest it would cost more.

We can't just apply simple math- we don't know what TSMC is charging Nvidia for wafers, nor do we know what working GPUs cost Nvidia. Nvidia could be paying less, and likely would be, as they're pretty bullish as a company. That's just basic negotiation.

Price has to do with

A lot of things. I get your line of thought and agree that it applies, I just see it as incomplete- and we'll never get the complete picture. The argument that Nvidia is paying more, and the argument that they're paying less, are both supportable with available evidence and reasoning.
 
I feel the 290X is unfairly mentioned in your post given that at release it was cheaper and faster than the 780, and much cheaper while still competitive with the Titan. (And [H]'s own review showed that at 4K it was consistently faster than the Titan, apples to apples.)

Was it cheap? Hell no. But it wasn't some hugely price-gouging product.

You know what? You're right. I was thinking that the 780Ti was $650 at the time of the 290X's release, but it wasn't released yet. My mistake!
 
Literally none of this is correct, and I don't know why you quoted me.



We can't just apply simple math- we don't know what TSMC is charging Nvidia for wafers, nor do we know what working GPUs cost Nvidia. Nvidia could be paying less, and likely would be, as they're pretty bullish as a company. That's just basic negotiation.



A lot of things. I get your line of thought and agree that it applies, I just see it as incomplete- and we'll never get the complete picture. The argument that Nvidia is paying more, and the argument that they're paying less, are both supportable with available evidence and reasoning.


My guess is cost would almost be a wash, at least reference card vs. reference card. AMD tends to overbuild their cards on the component side while Nvidia tends to run just good enough as far as component specs. I think the ones that get the short end of the stick on the Nvidia side are their board partners, who pay out the ass per GPU, especially when Nvidia was doing the non-A-spec and A-spec chips. I have a feeling it's much cheaper per GPU for AMD board partners.

As far as TSMC goes, Nvidia probably pays less per wafer due to being a bigger purchaser, but you also have to consider how many GPUs they get per wafer, whereas AMD is getting many more GPUs per wafer. Since neither company nor TSMC has stated what their actual yields are, as far as I know it's hard to tell what the losses are on both sides.

I still think it's plausible that there was always a plan to drop the price if Nvidia tried to race them and release SUPER first, but I'm sure that if Nvidia hadn't lowered their prices, AMD would have been totally fine staying at the original MSRP they announced. Either way, smart marketing move on their part.
 
We can't just apply simple math- we don't know what TSMC is charging Nvidia for wafers, nor do we know what working GPUs cost Nvidia. Nvidia could be paying less, and likely would be, as they're pretty bullish as a company. That's just basic negotiation.



A lot of things. I get your line of thought and agree that it applies, I just see it as incomplete- and we'll never get the complete picture. The argument that Nvidia is paying more, and the argument that they're paying less, are both supportable with available evidence and reasoning.

And I can agree with all of that. But I don't think it's a "bad" assumption; it follows everything we know, and it's what people far closer to the inside are also saying (though they seem to disagree on how much). The assumption could still be wrong, for sure, but I don't think it is a bad one. I am also not saying they *do* have more pricing headroom, just that I would think so, at least until I see someone or something pointing in the other direction. Always open to reading more on the topic, though deep dives on PC hardware seem to be more and more rare, with only a few places offering solid insights.

NV does have a better negotiating position, but that is far harder to gauge than the other factors at play. It's like NV offering MUCH better cooling solutions, something I am not sure AMD could get away with even if they hadn't gone with the blower for price reasons, as it would probably upset and undercut partner cards; better cooling and higher clocks are one of the main reasons to buy those in the first place.
 
My guess is cost would almost be a wash, at least reference card vs. reference card. AMD tends to overbuild their cards on the component side while Nvidia tends to run just good enough as far as component specs. I think the ones that get the short end of the stick on the Nvidia side are their board partners, who pay out the ass per GPU, especially when Nvidia was doing the non-A-spec and A-spec chips. I have a feeling it's much cheaper per GPU for AMD board partners.

As far as TSMC goes, Nvidia probably pays less per wafer due to being a bigger purchaser, but you also have to consider how many GPUs they get per wafer, whereas AMD is getting many more GPUs per wafer. Since neither company nor TSMC has stated what their actual yields are, as far as I know it's hard to tell what the losses are on both sides.

I still think it's plausible that there was always a plan to drop the price if Nvidia tried to race them and release SUPER first, but I'm sure that if Nvidia hadn't lowered their prices, AMD would have been totally fine staying at the original MSRP they announced. Either way, smart marketing move on their part.

I like the idea that nVidia charges board partners more for the chip. I could imagine that.

Pure BOM-wise, I think nVidia and AMD are about a wash. They have similar transistor counts, even though AMD doesn't support RT, and transistors per $$$ are similar between the nodes as far as I can tell.
 
Yes, thank you! Nvidia is an a-hole because they basically raised the cost of GPUs for no reason other than they could at the time. Now they're F'd because their chip is too big and has no headroom, and AMD has a much smaller chip that in some benchmarks matches the performance of their $700 card. So, to recap: AMD's new GPU scales up, Nvidia's doesn't, and Nvidia's cards cost a lot more to produce, so they can't cut prices. So the AMD story is totally plausible.

Your rant is pretty much self-contradicting.

First you complain that NVidia raised prices for no reason at all.

Next you claim their chips are too big and expensive, so they can't cut costs.

Those can't both be true.
 
This is pure AMD spin.

Nvidia has already been increasing the accessibility of RTX as yields improve, well before Navi was officially announced in May. This started with the RTX 2060 back in January, and now the Super 2060 (basically the 760 Kepler refresh) gives you 2070 power for $100 off.

AMD's Navi 5700 series is a tech dead-end, which will be on the market for another two to three years. They were forced to cut prices at launch or be forgotten entirely.

And while AMD will have much higher yields 12 months from now, and start cutting the price of the 5700 series down to the $250 range, Nvidia will be able to do the same with their Samsung 7nm EUV Ampere parts (due next year).

Nvidia knows you have to keep adding RTX to the rest of your lineup (plus increase top-end performance to get more game devs interested), so expect most GPUs in Ampere to have full support. Where exactly will that leave AMD's little Navi? AMD took three years to replace Polaris 10, so expect this to be on the market as AMD's sole midrange card until 2021-2022.

Navi may have accelerated the release of Super, but it was eventually going to be released. Nvidia has always added full support for their most expensive new features within two years (GeForce2 MX, FX 5200), so it was coming soon enough.
 
Even though I feel that NVIDIA needs to be taken down a peg, the only way to logically see this is as spin. AMD would lose more than they gained from this story if it were true and they had secret strategies to bait NVIDIA; revealing that would be terrible poker. AMD only benefits from saying this if pricing was a reaction and they want people to believe it was planned all along. The only other possibility is that the exec ran his mouth and will not be heard from again.
 
Still wondering why they didn't develop a better cooler for it like Nvidia did; they had the time.
 
Still wondering why they didn't develop a better cooler for it like Nvidia did; they had the time.

Easy answer: AMD needs to keep partners happy, and they needed to not increase the MSRP. For a few years now Nvidia has milked the Founders Edition stuff.

I don't think AMD cares too much about sales of reference cards down the road. They probably need partners to make some money on the mid-range cards. Maybe you will see a better cooler on the higher-end Navi, but even then I don't think AMD is going to want to step on partners. They would have to go the Founders Edition route and charge a premium, and they can only do that if they are dominating the market.

The only valid complaint here is not having partner cards out at the same time as the reference cards. But AMD did say on reddit that they see getting partner cards out earlier as a valid point, that they will look to improve it in the future, and that this has been great community feedback.
 
Nvidia has already been increasing the accessibility of RTX as yields improve, well before Navi was officially announced in May. This started with the RTX 2060 back in January, and now the Super 2060 (basically the 760 Kepler refresh) gives you 2070 power for $100 off.

Yes, $100 off for a 2070 that, given history with their chip naming schemes, is nothing but a 2060 priced at xx80 levels, the original 2060 being a cut-down 2060 Ti at xx70-level prices. The 2070 Super is actually a cut-down 2080. How magnanimous of Nvidia. The RTX 2060's shelf life was effectively six whole months, though apparently Nvidia is still going to sell them. I'm also seeing a price shift of about $30 across their new entry-level line, at least for some of the 1660 (Ti) models.
 
It was discussed in an AMD & The Full Nerd interview.
AMD talks Ryzen 3000 and Radeon 5700 series launch | The Full Nerd special edition on Jun 27, 2019
Jump to the 46 minute mark.

TLDR
It prevents returns and complaints from clueless dweebs who stuff one of these monsters into unventilated boxes.

But NVidia stopped with blower designs, so how come that isn't a problem for them? Fewer clueless dweebs buying NVidia? ;)

Seriously though, it sounds like more spin to me.
 
Gotta say, Lisa Su is proving to be not just smart but a very savvy business person.
Think of it this way: AMD knows they will play second fiddle for a while, so what they need to do is make products with greater profit margin potential. They just did that. This gives them room to breathe and actually lets them take advantage of Nvidia's high prices.
No, 7nm will not necessarily be magic for Nvidia... it can be delayed, it can yield low, especially if the chips are still fat (they are, unless redone). Nvidia is having their Vega moment in many ways; the 2000 series gained very little but price and a useless feature. I mean, really, CU for CU, did the 2000 series gain anything vs. the 1000 series?
 
You totally forgot how Nvidia charged a $100 premium for dropping blower designs for a few years. It came with a cost.

That has nothing to do with the point. If you need blowers because users are clueless, why doesn't NVidia also have to stick to blowers?
 
I mean, really, CU for CU, did the 2000 series gain anything vs. the 1000 series?
Maybe? I mean, the 2080 does have 10 fewer SM/CU than the 1080 Ti. It does have higher clocks, but are clocks really enough to cover the deficit of 640 CUDA cores? I don't know, I ain't an engineer.
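
Rough back-of-the-napkin sketch of that tradeoff, just multiplying cores by clock for peak FP32 throughput (reference boost clocks assumed; this ignores Turing's per-core and bandwidth improvements):

```python
# Peak FP32 throughput = cores * 2 FLOPs/cycle * clock.
# Clocks are approximate reference boost clocks (an assumption);
# this says nothing about per-core IPC or memory bandwidth.
cards = {
    "GTX 1080 Ti": (3584, 1.58),  # CUDA cores, boost GHz
    "RTX 2080":    (2944, 1.71),
}
for name, (cores, ghz) in cards.items():
    print(f"{name}: ~{cores * 2 * ghz / 1000:.1f} TFLOPS FP32")
# The 1080 Ti still comes out ahead on paper (~11.3 vs ~10.1 TFLOPS),
# so any real-world lead for the 2080 has to come from the architecture.
```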
 
Gotta say, Lisa Su is proving to be not just smart but a very savvy business person.
Think of it this way: AMD knows they will play second fiddle for a while, so what they need to do is make products with greater profit margin potential. They just did that. This gives them room to breathe and actually lets them take advantage of Nvidia's high prices.
No, 7nm will not necessarily be magic for Nvidia... it can be delayed, it can yield low, especially if the chips are still fat (they are, unless redone). Nvidia is having their Vega moment in many ways; the 2000 series gained very little but price and a useless feature. I mean, really, CU for CU, did the 2000 series gain anything vs. the 1000 series?

Maybe? I mean, the 2080 does have 10 fewer SM/CU than the 1080 Ti. It does have higher clocks, but are clocks really enough to cover the deficit of 640 CUDA cores? I don't know, I ain't an engineer.

IIRC Turing is better than Pascal in these ways:

- CUDA cores: ~15% more performance (split FP16 and FP32 pipelines run in parallel; improved effective memory bandwidth)
- NVENC greatly improved, which really helps streamers or people who encode with it.
- Ray Tracing
- DLSS

AMD is very attractive right now if you don't care about ray tracing. If nVidia ignored RT, AMD would be in their normal one-to-two-years-behind position.

I haven't seen a clock-for-clock comparison for AMD vs. their last gen, but nVidia has been getting 15-30% gains gen to gen, ignoring the node changes. AMD has a lot of catching up to do to be in a solid position (long term).
 
IIRC Turing is better than Pascal in these ways:

- CUDA cores: ~15% more performance (split FP16 and FP32 pipelines run in parallel; improved effective memory bandwidth)
- NVENC greatly improved, which really helps streamers or people who encode with it.
- Ray Tracing
- DLSS

AMD is very attractive right now if you don't care about ray tracing. If nVidia ignored RT, AMD would be in their normal one-to-two-years-behind position.

I haven't seen a clock-for-clock comparison for AMD vs. their last gen, but nVidia has been getting 15-30% gains gen to gen, ignoring the node changes. AMD has a lot of catching up to do to be in a solid position (long term).
DLSS can pretty much be scratched off, as AMD created an algorithm that works very well for image quality improvement... AMD's doesn't require developer implementation; you just toggle it on/off. It works with DX9, DX12, and Vulkan... not DX11, though.
 
AMD actually grinning/boasting about a price reduction is some epic self-own. Congrats - you just reduced your ASP before even hitting the market lmao.

I wouldn't consider it a self-own at all.

They knew they could not only afford the current pricing... it was always the intended at-market price. Their die size is massively smaller. Yes, they are on a more expensive node; however, reports are their 7nm yields are fantastic, so turning out somewhere around 50% more chips per wafer with very little waste means big profits. Rolling Navi out backwards, small die first and big die later, looks like it may pay off very nicely.

NV, on the other hand, turns out one wafer with very expensive parts. The best fully functioning chips end up in Quadro parts; the rest go to the 2080/70/60. That is sort of the way NV and AMD have been making GPUs for a long time now: design one massive high-end part, be it Turing, Pascal, or Vega. Skim the cream for the professional market, whatever is left over goes into the high-end consumer parts, and all the less-than-100%-functional chips get used for the mid- and low-end bits. Six months after launch you find a way to spin a cheaper wafer based on the same part but with no high-end potential... and sell those in the mid- and low-range parts. (Super is that wafer for Turing.)

By rolling out Navi backwards, with the little die going first, their yields are higher, their chips per wafer are higher... and they have no professional parts released on little Navi to skim the cream of the wafers for. Pretty much a perfect situation for a bit of price trolling. I'm sure they had a good idea NV had a new wafer spin ready for the low-to-mid-range Turing arch. They got NV to set its pricing based on a BS release price, then went to their intended full-margin price. NV is now in the position of having to clear 2060/70 stock against aggressive AMD pricing, and they priced their new "Super" mid-range parts above AMD's 5700s.

This move was only a self-own if their margins do in fact suck, and I sort of doubt that is the case. The 5700s have a 251mm2 die vs. the Supers' 545mm2. Depending on the source, 7nm looks to be 20-30% more expensive per wafer. However, assuming the industry-standard 300mm wafers, NV is only looking at a max of around 120 chips per wafer. Even with a small defect rate they are probably scrapping 10-20 chips per wafer, so roughly 100 chips per wafer... of which only a few are going to be high-margin, pristine, pro-quality chips. Whereas AMD is going to turn out 270-280 chips per wafer (more than double, as they should have less waste at the edges of the wafer). Counting defects, and not having to skim the crop for any pro-level parts, I'm sure they are still well north of 200 working 5700 chips per wafer. That puts AMD in a very good position margin-wise to play games with pricing.
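
If you want to sanity-check the chips-per-wafer numbers, here's a rough sketch using the common dies-per-wafer approximation and a simple Poisson yield model. The die sizes are the ones above; the 0.2 defects/cm2 density is just an assumption (TSMC doesn't publish it), and the model ignores that defective big dies often get salvaged as cut-down SKUs.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation for whole dies on a round wafer (edge loss included)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

# Die sizes from the post above; 0.2 defects/cm^2 is an assumed defect density.
for name, area in [("Navi 10 (251 mm2)", 251), ("TU104 (545 mm2)", 545)]:
    total = dies_per_wafer(area)
    good = round(total * poisson_yield(area, 0.2))
    print(f"{name}: ~{total} candidates/wafer, ~{good} defect-free")
```

The exact counts swing a lot with the assumed defect density, but the big-die penalty, fewer candidates per wafer and a lower defect-free fraction, is the point.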
 
That's a lot of rationalization that still doesn't speak to voluntarily reducing retail price before hitting store shelves.

Good yields or not, that's why I said it was a self-own. Now better than expected yields would soften the blow, sure...but I'm still not seeing a positive here.

If they could have sold their product without a price cut, I'm sure AMD would have preferred that.

Consider also that Nvidia likely could've increased their pricing, had the AMD parts underperformed. Now we do know how well the 5700XT clocks, challenging the 2080 at times.

You can't tell me that AMD doesn't wish they had been selling their parts at 2080 prices for the last 10 months. Thing is, AMD's GPU release cadence has been consistently several months behind, allowing Nvidia to stay unchallenged and command that higher ASP.
 
That's a lot of rationalization that still doesn't speak to voluntarily reducing retail price before hitting store shelves.

Good yields or not, that's why I said it was a self-own. Now better than expected yields would soften the blow, sure...but I'm still not seeing a positive here.

If they could have sold their product without a price cut, I'm sure AMD would have preferred that.

Well, if AMD knew that to hit a 50% margin they needed to sell at the pricing they are at right now, then announcing a higher price to get your competition, who you KNOW has a much higher cost on silicon than you do, to commit on pricing... is hardly a self-own.

They only owned themselves if the originally announced price was their 50% margin point. Based on what they have said, and what we can gather through some simple math on released die sizes, I tend to believe them. We know that Turing is a very expensive part to manufacture. In Nvidia's defense... it's easy to complain about them upping the pricing on their GPU lineup, but they designed a 500+mm2 behemoth of a chip. Saying they would be turning out 100 working chips per wafer is being generous, assuming only a 10% or so defect rate, and the yield of 2080 Ti-class chips has got to be in the single digits per wafer at best. So the bump in consumer-level pricing was probably needed to keep margin where it was.

It sounds like, at this market point anyway, AMD can hold a near-equal margin to NV and beat them soundly on pricing. I think AMD knew that if they faked NV out on price, NV wasn't going to respond with lower pricing on their Super parts; no one is willing to drop their margins first. I guess at the end of the day we don't know Nvidia's or AMD's margins on these things for sure. Neither is going to start breaking down sales numbers per part for investors either. ;) lol
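
To put rough numbers on that 50% margin idea: the $449/$399 and $379/$349 figures are the announced vs. launch prices, but the per-card cost numbers below are pure guesses for illustration, nothing AMD has disclosed.

```python
# Gross margin = (price - cost) / price. Prices are the public
# announced vs. launch MSRPs; the cost figures are assumptions.
def gross_margin(price, cost):
    return (price - cost) / price

cards = {
    "RX 5700 XT": (449, 399, 210),  # announced, launch, assumed card cost
    "RX 5700":    (379, 349, 185),
}
for name, (announced, launch, cost) in cards.items():
    print(f"{name}: {gross_margin(announced, cost):.0%} announced, "
          f"{gross_margin(launch, cost):.0%} at launch")
```

With costs anywhere in that ballpark, the launch prices still land in the neighborhood of a ~50% gross margin, which is why the "the lower price was the real price all along" story isn't crazy.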
 
I should disclose that I was previously invested in AMD, which is probably why I'm particularly critical when they give up revenue of any kind.
 
Their die size is massively smaller. Yes, they are on a more expensive node; however, reports are their 7nm yields are fantastic, so turning out somewhere around 50% more chips per wafer with very little waste means big profits. Rolling Navi out backwards, small die first and big die later, looks like it may pay off very nicely.

Getting really tired of this bogus tiny die size argument when on a MUCH more expensive process.

Even AMD is stating that 7nm is very expensive, and they seem to be using Navi as their example; note this is cost per yielded mm2 on a Navi-sized die. Note how it shoots up massively at 7nm.



[Chart from an AMD presentation: cost per yielded mm2 by process node, rising sharply at 7nm]


To my eye that graph (from AMD no less) is showing about 1.8X as expensive as 14/16nm.

Now looking at transistor density, NVidia is at 25 million/mm2 and AMD is at 41 million/mm2; that is only about 1.6X as dense.

If anything, that puts NVidia die costs for a similar transistor count LOWER than AMD's.

Let's just call this one a wash.
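
Quick arithmetic on those two ratios (both rough public estimates, not vendor-confirmed numbers):

```python
# Cost per transistor ratio = (cost per yielded mm2 ratio) / (density ratio).
cost_per_mm2_ratio = 1.8            # 7nm vs 14/16nm, read off the AMD slide
density_7nm, density_12nm = 41, 25  # million transistors per mm2 (estimates)

ratio = cost_per_mm2_ratio / (density_7nm / density_12nm)
print(f"7nm cost per transistor vs 12nm: ~{ratio:.2f}x")  # ~1.10x
```

So by these numbers 7nm is roughly 10% more per transistor, i.e. close to a wash, leaning slightly toward the older node.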
 
AMD actually grinning/boasting about a price reduction is some epic self-own. Congrats - you just reduced your ASP before even hitting the market lmao.

Having a contingency response is NOT a trap. Make no mistake, AMD would much rather not have needed to use that contingency, as it cut a chunk out of their profit margin.

Some people, before this was announced by AMD, claimed that AMD priced their cards deliberately high to have room for price cuts based on how Nvidia priced their Super cards. In previous generations Nvidia always spoiled AMD launches by launching faster products and lowering the price of the previous top end, like the 780 Ti and 780 when the 290 cards launched. Or Nvidia would do what they did with the 980 Ti: wait until AMD released the price and details of their card (Fury X) and then release the 980 Ti at the same price just beforehand, totally ruining AMD's launch.

This time AMD played it smarter, gave out false prices, and then dropped to the real prices after Nvidia launched the Super cards.

This speculation was on another thread and people said that we probably would never know if this was what happened.

Now we actually find out that AMD did play it that way, and you are all saying it's just spin by AMD? Why is it spin? Look at the Polaris launch or the Vega launch for examples: there was no reduction in prices despite having worse-performing cards, and in Vega's case a year later. Remember the moaning about price when Polaris was announced at $199 for the 4GB version, when everyone thought it was going to be $199 for the 8GB version. Vega, well, no need to remind anyone about the high launch price there.

The point I am making is that Lisa Su has shown that she is not willing to sacrifice margins to sell cards. I would well believe that what AMD said in the statement above is exactly what they planned. A small victory.

And finally, you both mention AMD taking a chunk out of their profit margins. How do you know what their margins are? If they planned this, then no, it hasn't taken anything out of their margins. All they have done is drop the prices to the ones they were going to release at all along, margin included.
 
Getting really tired of this bogus tiny die size argument when on a MUCH more expensive process.

To my eye that graph (from AMD no less) is showing about 1.8X as expensive as 14/16nm.

Now looking at transistor density, NVidia is at 25 million/mm2 and AMD is at 41 million/mm2; that is only about 1.6X as dense.

If anything, that puts NVidia die costs for a similar transistor count LOWER than AMD's.

Let's just call this one a wash.

Excellent plot!

Also note that this is only the centerpiece of silicon. It doesn't include the rest of the board. And even if it did include the entire board, that is nothing more than the hardware COGS. It doesn't cover hardware/manufacturing development costs, software/firmware dev costs, spending on developer support, operational costs, marketing, distribution, support, etc. Actual hardware COGS is, at most, 20% of the story, and the center silicon is likely half of that or less. The real margin differences between red and green come from the rest.
 
Getting really tired of this bogus tiny die size argument when on a MUCH more expensive process.

Even AMD is stating that 7nm is very expensive, and they seem to be using Navi as their example; note this is cost per yielded mm2 on a Navi-sized die. Note how it shoots up massively at 7nm.



[attached: AMD's cost-per-yielded-mm2 chart]

To my eye that graph (from AMD no less) is showing about 1.8X as expensive as 14/16nm.

Now looking at transistor density, NVidia is at 25 million/mm2 and AMD is at 41 million/mm2; that is only about 1.6X as dense.

If anything, that puts NVidia die costs for a similar transistor count LOWER than AMD's.

Let's just call this one a wash.

I haven't seen AMD say anywhere that 7nm is costing them more than previous nodes. Setup costs may be higher, but fabrication cost per billion transistors is drastically lower.

Your proof of high costs is, I believe, from a sky-is-falling article I have read that claims 3nm will never happen because it will cost too much. I remember the same arguments when the market was still on 22nm... then 10nm would never happen because of physics and cost. Ditto at 65nm... 32nm would cost too much and anything lower was physically impossible. People were making those types of arguments long before we were measuring process sizes in nanometers.

It's easy to find graphs that argue one way or the other. Bottom line is setup cost for 7nm is expensive, and so is 16nm+++ aka 12nm. However, fabrication cost per billion transistors is DOWN. When it comes to GPUs, which are pretty much the largest dies anyone presses, that cost per billion transistors is what is important. It's why Nvidia's die is twice the size. The biggest disadvantage of massive dies is the rate of failure; physics can't be beat there. There will always be a percentage of defects on a wafer. Pressing a perfect 100% operational wafer has likely never happened on any process... ok, perhaps one in a couple million, so like lottery winners. :) At an equal number of defects on a very good wafer, NV is going to junk a lot more chips based on nothing but the number of dies on the wafer. They are also going to have trouble turning out 100% functional parts, as physical defects obviously have much more surface area to affect.

16nm has a cost per billion transistors of $4.98
10nm cost drops to $3.81
I would assume NVs 12nm chips fall somewhere in between those costs per billion.
7nm cost per billion transistors drops to $2.65

Based on that I would guess:
2060 super (TU106) has a cost of ($4.40 x 10.8) $47.52
2070 super (TU104) has a cost of ($4.40 x 13.6) $59.84
5700xt/5700 has a cost of (2.65 x 10.3) $27.30
2080 ti (TU102) has a cost of ($4.40 x 18.6) $81.84

Factor in the fab's cut... and I think that is likely pretty close, assuming standard yield rates and no surprises. All reports are that everyone's fabs are hitting expected fail rates. Now it's possible NV has negotiated some nice deals from the fabs on the older process; perhaps the fab is only charging 30 points instead of 50 or something. Still, to equal fab costs (not counting setup and R&D) they would need to be getting a screaming sub-20% deal.

So no, I highly doubt it's a wash. Setup costs are no doubt higher on 7nm; however, I tend to think the more advanced 12nm process NV is using costs much more to set up than the standard 16nm fabs as well. The 2060 Super's chips likely cost NV quite a bit more... and the TU104 chips used in the 2070 Super likely don't leave NV much room to drop pricing from where it is now.
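
Putting that per-die math in one place (the $4.40 for 12nm is my interpolation between the 16nm and 10nm figures above; none of these are published foundry prices):

```python
# Per-die silicon cost = $/billion transistors * transistor count.
# Dollar figures are the estimates quoted above, not foundry pricing.
cost_per_billion = {"12nm": 4.40, "7nm": 2.65}
chips = {
    "TU106 (RTX 2060 Super)": ("12nm", 10.8),  # billions of transistors
    "TU104 (RTX 2070 Super)": ("12nm", 13.6),
    "Navi 10 (RX 5700/XT)":   ("7nm", 10.3),
    "TU102 (RTX 2080 Ti)":    ("12nm", 18.6),
}
for name, (node, billions) in chips.items():
    print(f"{name}: ~${cost_per_billion[node] * billions:.2f} per die")
```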
 
Everyone in this thread has gone full retard! Nvidia's RTX cards, when they were released, were priced much higher than the previous generation, can we all agree? Therefore Nvidia set the price for the GPU market this generation higher than the previous one, agreed? Part of the reason RTX cards cost more is that Nvidia's chips are huge and cost more to produce; this is a fact, go look it up. AMD's new RDNA Navi 7nm GPU is a much smaller die and will scale up much more easily than Nvidia's GPU. What part of this do you not understand, or have you completely lost all common sense?
 
Everyone in this thread has gone full retard! Nvidia's RTX cards, when they were released, were priced much higher than the previous generation, can we all agree? Therefore Nvidia set the price for the GPU market this generation higher than the previous one, agreed? Part of the reason RTX cards cost more is that Nvidia's chips are huge and cost more to produce; this is a fact, go look it up. AMD's new RDNA Navi 7nm GPU is a much smaller die and will scale up much more easily than Nvidia's GPU. What part of this do you not understand, or have you completely lost all common sense?

First: You keep flip flopping on your argument:
Before: "Nvidia is a-hole because they basically raised the the cost of GPU for no reason other than they could at the time"
Now: "Part of the reason why RTX cards cost more is because Nvidia's chips huge and cost more to produce this is a fact go look it up."

It helps to figure out your position, if you actually have a consistent, coherent one. You contradict yourself between posts and even within the same post.

Second: you seem to have a hard time understanding that die pricing is based on size and cost/mm2. AMD themselves admit that 7nm cost/mm2 is about 1.8 times as much as 14/16nm, which renders the size argument completely moot.
 
I'm not flipping around at all. I'm just pointing out that when RTX was released last year, Nvidia's prices were much higher than the previous generation. What don't you understand? I feel like I am talking to a retard.

Your flip-flopping was on the reason for the higher price: first you said they were assholes for raising the price for "no reason other than they could at the time".

Now they need high prices because their dies are expensive. It should be obvious that "no reason" and a very good reason (an expensive die) are contradictions.

I know some people have irrational hatred for some companies that makes them want to attack those companies for whatever they do, but you don't get to attack them both for having "no reason" to raise prices and then attack their market position by claiming their manufacturing costs are too high to compete.

Your NVidia hate is interfering with your thinking.
 
I think we are all confusing R&D and production asset costs with actual wafer costs.

7nm is drastically cheaper per billion transistors. This is a fact; it has been true of every single shrink in the history of tech. AMD's chart is clearly including a ton of costs that are not specifically wafer costs. The first companies to jump on a new process pay the highest percentages, as the fabs pay off all their fancy new machines.

AMD has an advantage in that regard, as they are purchasing not only GPU wafers but CPU ones as well. I have no idea what the end costs to AMD are; no one but the bean counters at AMD knows that.

Without question, though, 7nm is cheaper than 16nm per transistor. There is no getting around the fact that at 7nm you can turn out almost twice as many chips from the same 300mm hunk of silicon. Of course, it's a new node that requires new manufacturing hardware and extra work during the design phase. No doubt, though, once you're stamping silicon the costs are lower.

Yes, Nvidia's price bump for the RTX line was BS... not because their costs didn't go up, but because they sold consumers (gamers) a bill of goods about RTX features being designed for gamers. The truth is, and I think even Nvidia's hardcore believers are starting to wake up to the fact, that Turing, like Volta, was designed for AI/server markets. With Volta, NV didn't even bother trying to turn out a consumer version... they knew the performance bump would be negligible. With Turing, thanks to some interesting enhancements to their tensor engine, they were able to market it at least as a gaming part. And hey, NV has been pretty good at getting the game industry to jump when they want it to, so it wasn't a bad bet.

AMD and NV right now are producing cards in completely opposite fashions. Nvidia is producing massive dies, with the 100% functional chips ending up in high-end parts and the cast-offs becoming the mid-range parts; basically, 2070 and 2060 parts are salvage. That is the way everyone has done it for ages. It seems AMD, this time out, has started with what they are calling little Navi. They are not trying to fab parts with 19 billion transistors like a 2080 Ti and sell the cast-offs as lower-end models. Instead, they have designed a 10 billion transistor part and are selling all of them in two SKUs.

There is zero doubt that it is cheaper to produce Navi chips. It will be interesting when NV moves to 7nm... I have a feeling NV will do the same thing and start with a little Ampere. Nvidia has been showing off their deep-learning chiplet research already. When they move to 7nm it would be logical to move the tensor (RTX) stuff to its own chiplet. The next gen of GPUs is going to be a lot more interesting than this one has been; Ampere vs. Xe vs. RDNA should make 2020 pretty interesting. It's possible that, just like back in the good ole days (the 90s), we will have three very different GPU techs on the market, and they will all be 7nm parts. :)
 
Starting small and going large is not a new concept.

Nvidia did exactly that with Maxwell. The first Maxwell parts were the 750 and 750ti despite the rest of the 700 line being Kepler. The 900 series followed about 8 months later with Maxwell proper.

It worked well for Nvidia, and I can see it working well enough for AMD too, but I'm sure there is a reason Nvidia isn't still using that method. Or maybe there isn't, I dunno.
 
Your flip-flopping was on the reason for the higher price: first you said they were assholes for raising the price for "no reason other than they could at the time".

Now they need high prices because their dies are expensive. It should be obvious that "no reason" and a very good reason (an expensive die) are contradictions.

I know some people have irrational hatred for some companies that makes them want to attack those companies for whatever they do, but you don't get to attack them both for having "no reason" to raise prices and then attack their market position by claiming their manufacturing costs are too high to compete.

Your NVidia hate is interfering with your thinking.


What were the initial prices for the RTX 2080 vs. the GTX 1080, etc.? Were they not much higher?
 
I think we are all confusing R&D and production asset costs with actual wafer costs.

7nm is drastically cheaper per billion transistors. This is a fact; it has been true of every single shrink in the history of tech. AMD's chart is clearly including a ton of costs that are not specifically wafer costs. The first companies to jump on a new process pay the highest percentages, as the fabs pay off all their fancy new machines.

AMD has an advantage in that regard, as they are purchasing not only GPU wafers but CPU ones as well. I have no idea what the end costs to AMD are; no one but the bean counters at AMD knows that.

Without question, though, 7nm is cheaper than 16nm per transistor. There is no getting around the fact that at 7nm you can turn out almost twice as many chips from the same 300mm hunk of silicon. Of course, it's a new node that requires new manufacturing hardware and extra work during the design phase. No doubt, though, once you're stamping silicon the costs are lower.

Yes, Nvidia's price bump for the RTX line was BS... not because their costs didn't go up, but because they sold consumers (gamers) a bill of goods about RTX features being designed for gamers. The truth is, and I think even Nvidia's hardcore believers are starting to wake up to the fact, that Turing, like Volta, was designed for AI/server markets. With Volta, NV didn't even bother trying to turn out a consumer version... they knew the performance bump would be negligible. With Turing, thanks to some interesting enhancements to their tensor engine, they were able to market it at least as a gaming part. And hey, NV has been pretty good at getting the game industry to jump when they want it to, so it wasn't a bad bet.

AMD and NV right now are producing cards in completely opposite fashions. Nvidia is producing massive dies, with the 100% functional chips ending up in high-end parts and the cast-offs becoming the mid-range parts; basically, 2070 and 2060 parts are salvage. That is the way everyone has done it for ages. It seems AMD, this time out, has started with what they are calling little Navi. They are not trying to fab parts with 19 billion transistors like a 2080 Ti and sell the cast-offs as lower-end models. Instead, they have designed a 10 billion transistor part and are selling all of them in two SKUs.

There is zero doubt that it is cheaper to produce Navi chips. It will be interesting when NV moves to 7nm... I have a feeling NV will do the same thing and start with a little Ampere. Nvidia has been showing off their deep-learning chiplet research already. When they move to 7nm it would be logical to move the tensor (RTX) stuff to its own chiplet. The next gen of GPUs is going to be a lot more interesting than this one has been; Ampere vs. Xe vs. RDNA should make 2020 pretty interesting. It's possible that, just like back in the good ole days (the 90s), we will have three very different GPU techs on the market, and they will all be 7nm parts. :)


Thank you for explaining my point exactly!
 
I think we are all confusing R&D and production asset costs with actual wafer costs.

7nm is drastically cheaper per billion transistors. This is a fact; it has been true of every single shrink in the history of tech. AMD's chart is clearly including a ton of costs that are not specifically wafer costs. The first companies to jump on a new process pay the highest percentages, as the fabs pay off all their fancy new machines.

AMD has an advantage in that regard, as they are purchasing not only GPU wafers but CPU ones as well. I have no idea what the end costs to AMD are; no one but the bean counters at AMD knows that.

Without question, though, 7nm is cheaper than 16nm per transistor. There is no getting around the fact that at 7nm you can turn out almost twice as many chips from the same 300mm hunk of silicon. Of course, it's a new node that requires new manufacturing hardware and extra work during the design phase. No doubt, though, once you're stamping silicon the costs are lower.

Yes, Nvidia's price bump for the RTX line was BS... not because their costs didn't go up, but because they sold consumers (gamers) a bill of goods about RTX features being designed for gamers. The truth is, and I think even Nvidia's hardcore believers are starting to wake up to the fact, that Turing, like Volta, was designed for AI/server markets. With Volta, NV didn't even bother trying to turn out a consumer version... they knew the performance bump would be negligible. With Turing, thanks to some interesting enhancements to their tensor engine, they were able to market it at least as a gaming part. And hey, NV has been pretty good at getting the game industry to jump when they want it to, so it wasn't a bad bet.

AMD and NV right now are producing cards in completely opposite fashions. Nvidia is producing massive dies, with the 100% functional chips ending up in high-end parts and the cast-offs becoming the mid-range parts; basically, 2070 and 2060 parts are salvage. That is the way everyone has done it for ages. It seems AMD, this time out, has started with what they are calling little Navi. They are not trying to fab parts with 19 billion transistors like a 2080 Ti and sell the cast-offs as lower-end models. Instead, they have designed a 10 billion transistor part and are selling all of them in two SKUs.

There is zero doubt that it is cheaper to produce Navi chips. It will be interesting when NV moves to 7nm... I have a feeling NV will do the same thing and start with a little Ampere. Nvidia has been showing off their deep-learning chiplet research already. When they move to 7nm it would be logical to move the tensor (RTX) stuff to its own chiplet. The next gen of GPUs is going to be a lot more interesting than this one has been; Ampere vs. Xe vs. RDNA should make 2020 pretty interesting. It's possible that, just like back in the good ole days (the 90s), we will have three very different GPU techs on the market, and they will all be 7nm parts. :)


Right, exactly. NV will have to move to a new design (7nm) or whatever to compete with Navi, because their current chip is way too big. That was my entire point.
 
That has nothing to do with the point. If you need blowers because users are clueless, why doesn't NVidia also have to stick to blowers?

Because they could make more money. It's as simple as that, and they were in a position to do just that with no competition.
 
The 7970 was a bit slower than the 680 with much louder noise and temps. Plus it was $50 more.
It was a faster card and OC'd better, though. Mine was not much slower than an AIB 290X lol.
Getting really tired of this bogus tiny die size argument when on a MUCH more expensive process.

Even AMD is stating that 7nm is very expensive, and they seem to be using Navi as their example; note this is cost per yielded mm2 on a Navi-sized die. Note how it shoots up massively at 7nm.



[attached: AMD's cost-per-yielded-mm2 chart]

To my eye that graph (from AMD no less) is showing about 1.8X as expensive as 14/16nm.

Now looking at transistor density, NVidia is at 25 million/mm2 and AMD is at 41 million/mm2; that is only about 1.6X as dense.

If anything, that puts NVidia die costs for a similar transistor count LOWER than AMD's.

Let's just call this one a wash.
There is another chart showing it's a wash: per mm2 the cost is higher, but the mm2 needed is lower for 7nm.
 