AMD CEO Confirms 7nm Navi High-End Radeon RX Graphics Cards and 4th Gen Ryzen CPUs For Notebooks

If Navi can undercut without RTX features, it's a win for AMD. It's not hard to significantly undercut the market when Nvidia has pushed prices up so much. I still don't think they will, but this is the one instance where they likely could.

That's the thing: this will be a new, high-end release from AMD, and Nvidia can counter by simply dropping prices and marketing RTX. Big Navi without hardware RT wouldn't be DOA as a product in a vacuum; I have no doubt that it will be very nice. But against Nvidia they have to do something besides price lower, and if they're not going to be feature competitive, expect uptake to be extremely slow.
 
The "high end" graphics card market has priced its self out of its own market IMO. I use to upgrade graphics cards every year starting with a few Voodoo cards and then the Geforce 2 Pro... at some point I started skipping a generation because the performance jumps werent as big. Eventually it got to the point where I have pretty much stopped buying graphics cards while I wait for something significantly better in the mid range.

A lot of you are saying things like "Nvidia has the high end"... and to that I say, so what? The Steam hardware survey paints a pretty clear picture: midrange cards dominate the top spots.

Edit: It's just now at the point where I feel like it really makes sense to upgrade my system as a whole. It's been a while (system in sig).
 
If you're already paying ~US$700+, what's another US$50 to be current on features?



Then you're not interested, but many are; again, we're talking above the 5700 XT, not in that space or below it.
I'm interested because the market is largely set from the top down. If AMD can put out RTX 2080 Ti speeds and lower the price of entry, that's good news for me and others who don't care for features that are barely usable. And while you say many are interested, I see a lot more people on here (even those with a 2080 Ti) saying it's a good card because of raw performance; they don't care about the RTX features. I'll wait until it's mainstream, comes down in price, and can maintain high performance.

It's the same reason I didn't use 4x AA when it first hit (or most other new features): the performance hit was too much. That doesn't make it a bad feature, it just wasn't usable on the first cards unless you played at lower frame rates and/or resolutions. Same thing here: cool feature, and it will eventually be cool in practice, but for now, meh. It's not that I can't afford a 2080 Ti, it's just not worth wasting my money on. I'd rather buy 3-4 5700s and update a couple of my PCs ;).
Heck, I used to get pre-release hardware from ATI and Nvidia (under NDA, of course) to work on newer technologies like parallax mapping... It's not that I don't like these things (I wrote a software raytracer years ago, so I know how difficult and expensive this is); it's just that I weigh cost/benefit before I buy things. If AMD can get me RTX 2080 Ti speeds for $800, I'd probably make the jump, but not at $1k+, regardless of raytracing support.
 
'Good' or 'bad' here isn't about whether I like it; if AMD doesn't get more competitive in the GPU space, they're going to get crushed. That I wouldn't like, but it seems like a road they're determined to tread.



With the 5700, I almost agree; with the 5700 XT, I expect the Nvidia competitors to age better.

Big Navi? Dead on arrival without hardware RT.

And that's not based on whether there are games right now; it's based on the games that are coming and on Nvidia shipping the hardware. If AMD doesn't, their parts are going to be ignored.

While I think most of us would like to see AMD compete like for like with Nvidia, they don't have to in order to make a profit. They can own the low-to-mid tier and still be successful.

Big Navi is not remotely DOA without RT. The only way a 2080 Ti competitor (if they can even make one) is DOA is if it doesn't have RT AND is priced similarly AND has similar performance. Plenty of people would buy a 2080 Ti sans RT if it were 20-30% less expensive, or would pay the same if it offered 20-30% more FPS.

I'd be much more inclined to go 4K and HDR before I ever worry about RT. Also, didn't AMD say they would be looking at server-side RT rather than discrete on-chip RT? I remember reading something about that.
 

Of course, they have to make a profit. They aren't going to spend a hundred million dollars developing a monster GPU chip and sell it at no profit. Get real.

IMO we won't see Big Navi without ray tracing hardware. Big Navi will need a 2+ year lifespan, and over that time the importance of RT will only grow, making a high-end chip without RT more and more irrelevant. IMO it would be a stupid move to release Big Navi without RT hardware, and Lisa Su isn't stupid.

In hypothetical land it isn't strictly DOA without RT hardware, but pricing and timing would be VERY tricky. Without RT hardware, it would have to launch very soon, undercut Nvidia significantly, and hope Nvidia didn't significantly cut the price of the 2080 Ti. But really, this won't happen.
 
AMD's RT patent was filed on Dec 22, 2017, nearly two years ago, which is about how long it takes to get a chip from a theoretical, simulated, patentable design to early silicon. Navi has already proven me wrong once, so I ain't going to bet anything, but it is possible big Navi could support it. And it would make sense: instead of doing what Nvidia did and shipping a lineup where most of the cards are poorly equipped for high-quality DXR, AMD could keep that functionality to the high-end cards, where it belongs.
And one thing people usually overlook when comparing the 5700 XT and its competitors is die size: the XT is only 251 mm², since the '7nm' process is quite dense. So a 'big Navi' could still be only 2060-die-sized (445 mm², or 77% larger) while being a high-end part.
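As a quick back-of-the-envelope sketch of that die-size argument (the 40-CU count for the 5700 XT is public, but the linear CU-per-area scaling below is purely an illustrative assumption, not anything AMD has announced):

```python
# Rough die-size scaling sketch, not an official spec projection.
# Known figures: Navi 10 (5700 XT) is ~251 mm^2, TU106 (2060/2070) is ~445 mm^2.

navi10_mm2 = 251.0
tu106_mm2 = 445.0

ratio = tu106_mm2 / navi10_mm2
print(f"TU106 is {ratio:.2f}x the area of Navi 10 ({(ratio - 1) * 100:.0f}% larger)")

navi10_cus = 40  # 5700 XT compute units
# Assumes CU count scales roughly linearly with die area, which real designs don't do exactly.
hypothetical_big_navi_cus = navi10_cus * ratio
print(f"A Navi die of that size could hold roughly {hypothetical_big_navi_cus:.0f} CUs")
```

Running it gives the ~77% figure and a ballpark of ~71 CUs for a TU106-sized Navi, under those assumptions.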
 

One thing people constantly overlook when making the "small die" argument on 7nm is that 7nm silicon area costs nearly twice as much per mm², making that argument just about entirely moot.

If you want to compare, compare the transistor counts in play instead of die size. That's a better comparison point even across processes, since the cost increase for 7nm is pretty close to the density increase, and of course Nvidia will migrate to 7nm and beyond as well.
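A minimal sketch of that cost argument, using the rough ~2x cost-per-area and ~2x density figures from the post above rather than any actual foundry pricing:

```python
# Back-of-the-envelope: why transistor count is a better cost proxy than die area
# when comparing a 7nm chip against a 12/16nm chip. The 2x multipliers are the
# rough assumptions from the post above, not real wafer quotes.

cost_per_mm2_scale = 2.0   # assumed: 7nm area costs ~2x older-node area
density_scale = 2.0        # assumed: 7nm packs ~2x the transistors per mm^2

relative_cost_per_transistor = cost_per_mm2_scale / density_scale
print(f"Relative 7nm cost per transistor: {relative_cost_per_transistor:.2f}x")
# ~1.0x: the density gain and the price premium roughly cancel out, so comparing
# transistor counts gets you closer to comparing actual silicon cost.
```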
 

The reason people bring that up is that the "AMD costs more to make" legend persists to this day. In reality, if you look at the die-size chart (costed per mm², not by transistor), the costs are about the same for a 2060/2070 die and the 5700 series.
 

Attachments
  • amd yield die mm2 area ryzen rome zen.png (371.1 KB)

Which was exactly my point.

A point that you completely ignored in your previous post, where you said a high-end Navi would only be 2060 die size, neglecting to mention that a 7nm die of that size would probably cost close to TU102 die prices. Even worse, your small-die post really implies it would be a huge AMD advantage, given that the 2060/2070 die is 77% larger, without any other qualifiers.

A transistor comparison is more applicable across the process divide and can show how far AMD was behind, and how they have caught up.

Vega 64: 12.5 billion transistors
Pascal (GTX 1080): 7.2 billion transistors

In the Vega/Pascal era, AMD needed ~74% more transistors just to deliver equivalent performance, which is why it did cost much more to build AMD GPUs in that generation (along with the bonus extra cost of HBM).

Navi 10 (5700 XT): 10.3 billion transistors
Turing (RTX 2070): 10.8 billion transistors

Now they are running at about parity on transistor count for equivalent performance, and AMD is using the same GDDR6, so it's at about cost parity.
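For reference, those percentages fall straight out of the quoted transistor counts; a quick sketch using only the figures above:

```python
# Transistor-count comparison from the post (counts in billions).
vega64, gtx1080 = 12.5, 7.2
navi10, tu106 = 10.3, 10.8

print(f"Vega 64 vs GTX 1080: {(vega64 / gtx1080 - 1) * 100:.0f}% more transistors")   # ~74%
print(f"5700 XT vs RTX 2070: {(1 - navi10 / tu106) * 100:.0f}% fewer transistors")    # ~5%
```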
 
Very close... the 2070 Super is 13.6 billion, and the 5700 XT performs within a few percent of it on average. Subtract the transistors spent on tensor cores and you're pretty close to even. It will be interesting to see how Nvidia does with 7nm.
 

The 2070 makes way more sense to compare against than the 2070S, since the 2070S has a portion of the die disabled.

The 2070 is 2% slower in rasterized workloads and has 5% more transistors than the 5700 XT.
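A small sketch of what those two figures imply for rasterized performance per transistor, using only the 2% and 5% numbers above (and ignoring that the Turing count includes RT/tensor hardware that sits idle in pure rasterization):

```python
# Naive perf-per-transistor ratio from the figures in the post above.
perf_2070_vs_5700xt = 0.98         # 2070 is ~2% slower in rasterized workloads
transistors_2070_vs_5700xt = 1.05  # 2070 has ~5% more transistors

ratio = perf_2070_vs_5700xt / transistors_2070_vs_5700xt
print(f"2070 rasterized perf per transistor vs 5700 XT: {ratio:.2f}x")
# ~0.93x, i.e. the 5700 XT extracts roughly 7% more rasterized performance per
# transistor, before accounting for the RT/tensor silicon in the Turing count.
```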
 

The regular 2070 is within a couple of percent of the 5700 XT, and is a better comparison since it and the 5700 XT are both fully enabled chips.

Comparing against a chip that has a high number of units disabled makes for a poor comparison.
 
So... you're telling me the transistor count is from before they disable stuff? Who the heck uses that number? If that's the case, I'm sorry, I thought it was the enabled portion; why would anyone give you the disabled count?
So using the 2070 as the baseline, remove the tensor cores and performance per transistor is still a little low for AMD compared to Nvidia (ignoring all other factors, of course).
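A sketch of that adjustment, with the RT/tensor transistor share as a purely hypothetical placeholder since Nvidia doesn't publish a per-unit transistor breakdown:

```python
# Hypothetical "subtract the RT/tensor silicon" adjustment described above.
# rt_tensor_share is an illustrative placeholder, not a published figure.

perf_2070 = 0.98           # relative to 5700 XT (2% slower, rasterized)
transistors_2070 = 1.05    # relative to 5700 XT (5% more)

for rt_tensor_share in (0.05, 0.10, 0.15):  # assumed fraction of Turing transistors
    raster_transistors = transistors_2070 * (1 - rt_tensor_share)
    ratio = perf_2070 / raster_transistors
    print(f"share={rt_tensor_share:.0%}: 2070 raster perf/transistor vs 5700 XT = {ratio:.2f}x")
# Depending on the assumed share, the comparison shifts from roughly even to
# Nvidia slightly ahead, which is the point the post above is making.
```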
 