NVIDIA GeForce RTX 4070 Reviews


Canada Computers is already giving away a RAM kit with purchase, and I've seen some models on sale locally here, for a card that just launched.
 
Well, maybe there's some hope for mankind not falling for this poor value of a card. AMD had 12GB cards for less than $500 last generation that still kick ass. $499 or below, even with all the Nvidia-exclusive features. There's probably more coming into play here: the rapid realization that the last generation of Nvidia cards is failing to keep up with newer, better-looking games due to lack of VRAM, letting down a number of customers who paid way more than they should have on those cards due to way-over-MSRP pricing, plus the glut of Ampere GPUs Nvidia stockpiled for mining instead of gamers. Anyway, just an opinion: people are getting sick of Nvidia's shit.
 
Yep, turns out not many people want to buy the RTX 4070 for $600. Who could have guessed?


That's kind of misleading because basically, only some OC'd models were brought down from $6XX to $599. I haven't seen anything actually less than $599 yet.
 
Okay... and?

The cards by themselves are perfectly capable, but having a feature that can turn a 4070Ti into a 4090, granted with a minor hit in IQ, with the click of one, or maybe two, buttons is by all accounts an awesome feature. Why do you think AMD's followed suit? FSR, and now FSR "Fluid Frames," are only there because Nvidia showed how beneficial features like this would be. When you can reduce the load on the GPU itself, effectively lowering power consumption, it will appeal to the crowd of people who value low power consumption. When you can bump your frames up massively in a game where you may not be getting playable frame rates with all the bells and whistles turned up to max, it will appeal to that crowd. It's a win-win in every situation.

DLSS will eventually get to the point where you won't be able to tell the difference between native and DLSS, and with Frame Generation in the mix going forward, I don't understand why it gets flak, outside of a lot of the AMD crowd finding reasons to poke at Nvidia instead of demanding AMD do something more than slap a bunch of VRAM on a card while offering last-gen RT performance, and still relatively high power consumption compared to their Nvidia counterparts, for slightly better raster performance and nothing else. This is one of the reasons I didn't go AMD this time around: they're not innovating, they're playing catch-up. Frame Generation is just another example of how AMD just isn't interested in innovation until Nvidia does it first.

The only negative I can see coming from techs like FSR or DLSS is that they could make AAA devs lazier with their PC ports.
Try this, just came out too

 
 
Try this, just came out too


I like that he touched on the DLSS/FSR update issues and that he mentioned that you can manually update DLSS if you feel the need. Nvidia creating the auto-update program for DLSS is a good thing, and I didn't know they had done that.
I am not terribly surprised by the results, especially at 1440p, given that upscaling from 1080p to 1440p is a much smaller jump than upscaling from 1440p to 4K. I wonder if Epic's texture streaming pipeline could be configured to, say, upscale from 2K to 4K or something like that so it is less of a jump.
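
For anyone curious, the raw pixel math alone backs that up; a quick shell check (plain arithmetic, nothing GPU-specific):

Code:
# pixel counts per output resolution
p1080=$((1920 * 1080))   # 2,073,600
p1440=$((2560 * 1440))   # 3,686,400
p4k=$((3840 * 2160))     # 8,294,400
echo "scale=3; $p1440 / $p1080" | bc   # ~1.78x more pixels going 1080p -> 1440p
echo "scale=3; $p4k / $p1440" | bc     # 2.25x more pixels going 1440p -> 4K

So the 1440p case asks the upscaler to invent quite a bit less information per output frame.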
 
Try this, just came out too


I like that he touched on the DLSS/FSR update issues and that he mentioned that you can manually update DLSS if you feel the need. Nvidia creating the auto-update program for DLSS is a good thing, and I didn't know they had done that.
I am not terribly surprised by the results, especially at 1440p, given that upscaling from 1080p to 1440p is a much smaller jump than upscaling from 1440p to 4K. I wonder if Epic's texture streaming pipeline could be configured to, say, upscale from 2K to 4K or something like that so it is less of a jump.
What about this?

https://www.tomshardware.com/news/rtx-4070-tested-with-pentium

 
970 was roughly equivalent to a 780 Ti - for $329
1070 was equivalent performance to a 980 Ti / Titan X - for $379
2070 was a bit worse than a 1080 Ti - for $499
2070 Super was roughly equivalent or beat a 1080 Ti - for $499
3070 was equivalent performance to a 2080 Ti - for $499
4070 is roughly equal to, but can still lose to a 3080 (regular, not Ti) - for $599

So not only is the 4070 not even matching the performance characteristics of previous 70-class branded GPUs, Nvidia also has the gall to charge you $100 extra, minimum, compared to those. x70 cards for many generations now have always met or beaten the previous generation's flagship. Here it trades blows with a 3080. Big whoop.

4070 and 4070 Ti are misbranded. 4070 Ti should be the 4070 (for 4070 prices), and the 4070 is really a 4060 Ti...for $600.
 
What about this?
I think this shows a common misconception about frame generation pretty well.

There's an idea that you will need a big machine for it to help, because you need 70 fps or so native to have a perfect experience.

That does not take into account that 40 fps instead of 22 native is maybe the moment where the tech is the most beneficial, even if it is still unacceptable to some, and it becomes less and less of a boost as you go up.

The notion that going from 35 to 60 fps via frame generation still leaves you with slightly worse than 35 fps latency, so it's worthless, does not take into account that for most people it will be a much better experience, if for some reason a game would only run at 35 on their system.
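
To put rough numbers on that 35 fps case (a toy calculation, assuming frame generation roughly doubles the presented frames while input is still sampled at the rendered rate; this is not a description of Nvidia's actual pipeline):

Code:
native_fps=35
rendered_ms=$(echo "scale=2; 1000 / $native_fps" | bc)   # ~28.6 ms per rendered frame
presented_ms=$(echo "scale=2; $rendered_ms / 2" | bc)    # ~14.3 ms per presented frame, i.e. ~70 fps on the counter
echo "rendered frame time:  ${rendered_ms} ms (latency still tracks roughly this, plus a little extra)"
echo "presented frame time: ${presented_ms} ms (what you see as smoothness)"

The motion looks like 60-70 fps while the click-to-response delay stays in 35 fps territory, which is exactly why it's a better experience for most people even though the latency number barely moves.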
 
Strong opinion, since the competition is lacking in hardware performance at the top end. Are you going to complain about them naming their competitor a **90-series card instead of the **80-series card that they continually crow it's supposed to compete against, or is this a one-sided hate thing that is so prevalent on this board now?

Dude, you are one of only a few on here playing the "sides" game. Your entire argument in this thread has been to downplay any negative comments about Nvidia with the justification that "the other side would do it if the positions were reversed". Newsflash, bud... most of us will call out any manufacturer for their BS and don't pick "sides".
 

I like this and it brings up a few questions for me, like what if this is a play by Nvidia for the console market?

I don't see the state of GPUs getting to the place where any console is going to do 60-120 fps 4K native any time soon.

But an Nvidia Grace CPU paired with something that could hold 60 fps with some form of DLSS, then use frame gen to take it to 120? With Reflex and the like, that would still have less latency than the existing consoles?

AMD’s margins in the PS5 and Xbox show it is a very profitable market to be in right now and this makes me wonder if Nvidia has more at play here than they are letting on.
 
I like this and it brings up a few questions for me, like what if this is a play by Nvidia for the console market?
I'm sure Nvidia would love to be in the next Xbox and PlayStation, but I don't see that happening. The choice to go x86 on consoles was made to help developers, since a lot of them are working on PC as well. Backwards compatibility also plays a role, and dumping AMD for Nvidia means emulation is going to be harder to pull off.
I don't see the state of GPUs getting to the place where any console is going to do 60-120 fps 4K native any time soon.
Technically the PS5 and Xbox Series S were already doing 60-120 fps 4k, but that's with games from the PS4/Xbone. Once newer games started coming out, we're right back down to 30 fps.
But an Nvidia Grace CPU paired with something that could hold 60 fps with some form of DLSS, then use frame gen to take it to 120? With Reflex and the like, that would still have less latency than the existing consoles?
There's nothing stopping anyone from using FSR or similar on current gen consoles. More than likely, Sony and Microsoft will come up with their own version of this technology. DLSS and FSR are not hardware dependent, as FSR has proven.
AMD’s margins in the PS5 and Xbox show it is a very profitable market to be in right now and this makes me wonder if Nvidia has more at play here than they are letting on.
Nvidia's problem with the console market is that they don't play nice with anyone. Remember the OG Xbox was powered by Nvidia and Microsoft never went back to them again, for good reasons. Sony as well with the PS3. In order to cut costs Sony and Microsoft will find new methods to make their consoles, and Nvidia is never one to willingly cut costs. Unless something drastically happens, the next PS6 and Xbox Next will likely still use AMD.
 
I'm sure Nvidia would love to be in the next Xbox and PlayStation, but I don't see that happening. The choice to go x86 on consoles was made to help developers, since a lot of them are working on PC as well. Backwards compatibility also plays a role, and dumping AMD for Nvidia means emulation is going to be harder to pull off.

Technically the PS5 and Xbox Series S were already doing 60-120 fps 4k, but that's with games from the PS4/Xbone. Once newer games started coming out, we're right back down to 30 fps.

There's nothing stopping anyone from using FSR or similar on current gen consoles. More than likely, Sony and Microsoft will come up with their own version of this technology. DLSS and FSR are not hardware dependent, as FSR has proven.

Nvidia's problem with the console market is that they don't play nice with anyone. Remember the OG Xbox was powered by Nvidia and Microsoft never went back to them again, for good reasons. Sony as well with the PS3. In order to cut costs Sony and Microsoft will find new methods to make their consoles, and Nvidia is never one to willingly cut costs. Unless something drastically happens, the next PS6 and Xbox Next will likely still use AMD.
Don't they already have this?

Microsoft DirectX DirectML technology an alternative to DLSS

https://www.guru3d.com/news-story/microsoft-eying-directml-as-dlss-alternative-on-xbox.html
 
Off topic, but I'll bite.
Dude you are one of only a few on here playing the "sides" game. Your entire argument in this thread had been to downplay any negative comments about nvidia with the justification that "the other side would do it if reversed position". Newsflash bud... most of us will call out any manufacturer for their bs and dont pick "sides".
Sure, we see it all the time here on these forums, right? No, I didn't think so.


And I have not downplayed a negative comment concerning Nvidia. Do not mistake my silence for fanboyism (if that's even a term).
 
I'm sure Nvidia would love to be in the next Xbox and PlayStation, but I don't see that happening. The choice to go x86 on consoles was made to help developers, since a lot of them are working on PC as well. Backwards compatibility also plays a role, and dumping AMD for Nvidia means emulation is going to be harder to pull off.

Technically the PS5 and Xbox Series S were already doing 60-120 fps 4k, but that's with games from the PS4/Xbone. Once newer games started coming out, we're right back down to 30 fps.

There's nothing stopping anyone from using FSR or similar on current gen consoles. More than likely, Sony and Microsoft will come up with their own version of this technology. DLSS and FSR are not hardware dependent, as FSR has proven.

Nvidia's problem with the console market is that they don't play nice with anyone. Remember the OG Xbox was powered by Nvidia and Microsoft never went back to them again, for good reasons. Sony as well with the PS3. In order to cut costs Sony and Microsoft will find new methods to make their consoles, and Nvidia is never one to willingly cut costs. Unless something drastically happens, the next PS6 and Xbox Next will likely still use AMD.
They never went back because IBM made Microsoft and Sony an offer they couldn't refuse. But that was then and this is now; change two members of a board and you will find the dynamic and direction of a company changes completely. AMD made sense after that because they were the next one-stop shop for CPU and GPU, which keeps integration all in one place and saves time and money. It wasn't an "Intel is being an asshole" or "Nvidia are jerks" thing; it was coordinating two distinctly different teams on a single project while keeping costs in check, while both swear their solution is the correct one and the problem is caused by the other guys. That is expensive and a headache.

You put too much weight on an architecture change between x86 and ARM; from a developer perspective that is nothing but a change in compiler, and C++, C#, Python, none of that cares. The biggest changes are memory and storage; those are the hardest parts to deal with, and those change every console generation regardless of core architecture. The PS6 or Xbox Sxt or whatever they call them will have a new GPU architecture, with new memory interfaces and new storage protocols, and those are the shitty parts to deal with, and those are what give developers a hard time.
 
970 was roughly equivalent to a 780 Ti - for $329
1070 was equivalent performance to a 980 Ti / Titan X - for $379
2070 was a bit worse than a 1080 Ti - for $499
2070 Super was roughly equivalent or beat a 1080 Ti - for $499
3070 was equivalent performance to a 2080 Ti - for $499
4070 is roughly equal to, but can still lose to a 3080 (regular, not Ti) - for $599

So not only is the 4070 not even matching the performance characteristics of previous 70-class branded GPUs, Nvidia also has the gall to charge you $100 extra, minimum, compared to those. x70 cards for many generations now have always met or beaten the previous generation's flagship. Here it trades blows with a 3080. Big whoop.

4070 and 4070 Ti are misbranded. 4070 Ti should be the 4070 (for 4070 prices), and the 4070 is really a 4060 Ti...for $600.
 
Off topic, but I'll bite.

Sure, we see it all the time here on these forums, right? No, I didn't think so.

And I have not downplayed a negative comment concerning Nvidia. Do not mistake my silence for fanboyism (if that's even a term).

OK, let me help you out then, since maybe you don't notice it... One of the negative comments was about problems related to lack of VRAM. You downplayed this by complaining that it was the fault of shitty ports. While it may indeed be true that this situation is in some cases the fault of a bad port, the fact of the matter is that bad ports happen and will continue to happen. It would not be a problem for the user if the cards had some extra VRAM to handle these types of situations. Sure, it might get fixed in a patch at some later date, but you cannot deny that this is and will continue to cause impact to users in these situations. Waiting for a patch to improve performance is not a great position to be in, however the situation arises. So you downplayed the issue and then segued into your usual tactics of "the other side". Don't get me wrong, I am sure there are some users on this forum who post stuff hating on Nvidia all the time while extolling the virtues of AMD, but you have to realise that you are doing the same thing in reverse non-stop. Basically it seems like you want to insult the whole forum and paint us all as Nvidia haters.
 
OK, let me help you out then, since maybe you don't notice it... One of the negative comments was about problems related to lack of VRAM. You downplayed this by complaining that it was the fault of shitty ports. While it may indeed be true that this situation is in some cases the fault of a bad port, the fact of the matter is that bad ports happen and will continue to happen. It would not be a problem for the user if the cards had some extra VRAM to handle these types of situations. Sure, it might get fixed in a patch at some later date, but you cannot deny that this is and will continue to cause impact to users in these situations. Waiting for a patch to improve performance is not a great position to be in, however the situation arises. So you downplayed the issue and then segued into your usual tactics of "the other side". Don't get me wrong, I am sure there are some users on this forum who post stuff hating on Nvidia all the time while extolling the virtues of AMD, but you have to realise that you are doing the same thing in reverse non-stop. Basically it seems like you want to insult the whole forum and paint us all as Nvidia haters.
You seem to not understand the difference between downplaying and stating facts; let me help you with that.

Downplaying something is when something is very wrong and it is being promoted as not so wrong. Each one of those posts has nothing to do with Nvidia having done anything wrong. In fact, those posts are about people's opinions! Let me make something very clear to you: if you are annoyed or angry at me pointing out people's double standards and feel attacked, then you might want to look in the mirror, because you might just be mad at someone pointing out the very double standards you yourself hold.

In any case, I've now found another poster to easily ignore on this forum. Thank you for that. (y)
 
Wait, you are shocked? Well, you shouldn't be. You are very open about your bias toward AMD. I haven't seen a single instance of you pretending otherwise. Nothing wrong with that, as long as you aren't spewing misinformation because of that bias. :coffee:
 
You downplayed the issue by suggesting a patch will fix it and that it's someone else's fault.
 
if you are annoyed or angry at me pointing out people's double standards and feel attacked, then you might want to look in the mirror, because you might just be mad at someone pointing out the very double standards you yourself hold.

You just described your own behavior to a T
 
970 was roughly equivalent to a 780 Ti - for $329
1070 was equivalent performance to a 980 Ti / Titan X - for $379
2070 was a bit worse than a 1080 Ti - for $499
2070 Super was roughly equivalent or beat a 1080 Ti - for $499
3070 was equivalent performance to a 2080 Ti - for $499
4070 is roughly equal to, but can still lose to a 3080 (regular, not Ti) - for $599

So not only is the 4070 not even matching the performance characteristics of previous 70-class branded GPUs, Nvidia also has the gall to charge you $100 extra, minimum, compared to those. x70 cards for many generations now have always met or beaten the previous generation's flagship. Here it trades blows with a 3080. Big whoop.

4070 and 4070 Ti are misbranded. 4070 Ti should be the 4070 (for 4070 prices), and the 4070 is really a 4060 Ti...for $600.
So by your logic:

7900XT should really be the 7700XT, since the 3070 was the 6750XT's* competition, and since the 4070Ti and 7900XT, by your own logic, are successors to the 3070 and 6750XT.
7900XTX should actually be the 7800XT since the 6800XT was competing against the 3080. Since the 7900XTX and 4080 compete directly this is an apt comparison.

Here's a semi-breakdown based on the original MSRP's at launch, with comparison to your idea of where they should land in the GPU hierarchy:

6750XT* was $550 while the 3070 was $500. +$50 - Nvidia wins in price.
7900XT launched at $900 while the 4070ti launched at $800. Price increase AMD +$350 vs Nvidia +$300 - Nvidia wins

6800XT was $649 while the 3080 was $699. +$50 - AMD wins
7900XTX launched at $1000, and the 4080 launched at $1200. Price increase AMD +$350 vs Nvidia +$500 - AMD wins

Based on that, Nvidia is actually less guilty of price gouging in the mid-tier than AMD is, if you use your logic that the 4070Ti is misbranded and should instead have been a 4070. AMD's just as guilty of misbranding, and they are more guilty of charging more money. It's in the high end where AMD should be dinged hard for misbranding their 7900XTX, since it's their halo card competing against an 80-level card; in this comparison Nvidia is charging a hell of a lot more than AMD, but AMD's markup is not so innocent either, just not as bad as Nvidia's.

Based on the rumors so far (take with a big grain of salt): the 7700XT should launch with a $599 MSRP, same as the 4070, and the 7800XT should MSRP at $699, which will pin it in between the 4070 and 4070Ti. On the Nvidia side, the rumored price for the 4060Ti is $499. So, both sides are guilty as sin for this generation's overpriced GPUs.

So based on your logic of the 4070 actually being a successor to the 3060Ti, its direct competitor was the 6700XT, with the 7700XT being its successor, again with the MSRPs at launch:

6700XT $480 while the 3060Ti was $400. +$80 - Nvidia wins.
7700XT** $600 while the 4060Ti $500. Price increase AMD +$120 vs Nvidia +$100 - Nvidia wins.

I can't really put the 7800XT in here because it has no direct competitor; AMD's prediction is it will fall in line with the 6950XT, which puts it below the 4070Ti but above the 4070.

*I opted to use the 6750XT since the 6700XT more often than not got beat by the 3070, thus it wasn't a real competitor to it. The 6700XT competed more with the 3060Ti.

**I opted not to use the 7600XT since AMD's prediction is that it will compete with the 6700XT which is most likely going to be well behind the 4060Ti.

I chose to use MSRPs at launch because I wanted to reflect what each company wanted to charge at launch, not what scalpers were selling them at. Since you feel that Nvidia's guilty of misbranding their 40-series cards, I felt it was apt to compare last gen's card pricing, and the price increase, with their logical successors, based on your statement that the 4070Ti should actually be the 4070, and the 4070 should actually be the 4060Ti. What I've concluded is that Nvidia is gouging on their 4080 card, while in the lower tiers there was a price increase, but not as much of an increase as AMD's, if the rumors are true, again basing it on where you, per your slippery-slope statement, feel each card should land on the GPU hierarchy. AMD is also just as guilty, if not more so, of misbranding: their halo product is competing directly with Nvidia's second-best GPU, and comparing it to last gen's launches would have actually made the 7900XTX the successor to the 6800XT.

tl;dr - in a nutshell, both companies are guilty of misbranding and overcharging, but AMD is actually the more guilty party if we were to use your logic.
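
For anyone who wants to check the arithmetic, here are the launch-MSRP deltas from the numbers quoted above (the rumored 7700XT/4060Ti prices included as-is; the exact values come out to $351/$501 where I rounded to $350/$500):

Code:
# gen-over-gen launch MSRP increases, numbers as listed in this post
while read -r matchup old new; do
  printf '%-24s +$%d\n' "$matchup" $((new - old))
done <<'EOF'
6750XT->7900XT 550 900
3070->4070Ti 500 800
6800XT->7900XTX 649 1000
3080->4080 699 1200
6700XT->7700XT(rumored) 480 600
3060Ti->4060Ti(rumored) 400 500
EOF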
 
They never went back because IBM made Microsoft and Sony an offer they couldn't refuse. But that was then and this is now; change two members of a board and you will find the dynamic and direction of a company changes completely. AMD made sense after that because they were the next one-stop shop for CPU and GPU, which keeps integration all in one place and saves time and money. It wasn't an "Intel is being an asshole" or "Nvidia are jerks" thing; it was coordinating two distinctly different teams on a single project while keeping costs in check, while both swear their solution is the correct one and the problem is caused by the other guys. That is expensive and a headache.
From what I remember it was Nvidia unwilling to lower prices for Microsoft. Apple left Nvidia because of the whole fiasco with their chips failing and Nvidia never stood behind their failed products.
You put too much weight on an architecture change between x86 and ARM; from a developer perspective that is nothing but a change in compiler, and C++, C#, Python, none of that cares.
It's a lot more involved when you have to optimize and fix bugs. There's also backwards compatibility, which is why AMD's RDNA2 is very similar to GCN.
The biggest changes are memory and storage; those are the hardest parts to deal with, and those change every console generation regardless of core architecture. The PS6 or Xbox Sxt or whatever they call them will have a new GPU architecture, with new memory interfaces and new storage protocols, and those are the shitty parts to deal with, and those are what give developers a hard time.
Not sure how those are a problem. GPUs are even less of a problem, from an emulation point of view.
 
From what I remember it was Nvidia unwilling to lower prices for Microsoft. Apple left Nvidia because of the whole fiasco with their chips failing and Nvidia never stood behind their failed products.
They were, but that was because they offered them too low a price, so they sold at a loss in year 1, broke even in year 2, and were making a profit in year 3; price renegotiation was never worked into the supply deal, though, so Microsoft wanted to change the contract mid-run. It went to arbitration, and the arbitrators sided more with Nvidia than with Microsoft, because Nvidia had a clear breakdown of how they worked their margin for development and such into their plan. Microsoft just wanted a way to price-match Sony, who was undercutting them.

On the CPU stuff, there are some optimizations, but it is rarely the CPU you are really optimizing for at this point; ARM and x86 are about as optimized as they are getting. It's a memory and loading thing: how they store textures, what textures they are loading, pipeline modifications, changing up meshes and stuff to let them cheat on a texture to improve performance here or there. But how ARM and x86 interface with memory now is close enough that it wouldn't change much.
But the amount of "optimizing" developers do for consoles is insane, and 99% of it is done in the art department: changing a shade of grey or the thickness of a black border can mean you save a gig of RAM and squeeze an extra 5 fps out of a level, just based on how it reacts with light.
 
6750XT* was $550 while the 3070 was $500. +$50 - Nvidia wins in price.
7900XT launched at $900 while the 4070ti launched at $800. Price increase AMD +$350 vs Nvidia +$300 - Nvidia wins
To be fair, the 4070ti was originally marketed as a $900 card, but as the "4080 12GB edition", which is probably why AMD priced the 7900xt at that same price point and got bamboozled by it.
 
To be fair, the 4070ti was originally marketed as a $900 card, but as the "4080 12GB edition", which is probably why AMD priced the 7900xt at that same price point and got bamboozled by it.
AMD priced the 7900xt where they did because they don't want to make them, and they don't want you to buy them. AMD needs the 6000 series to sell out; they can't afford to have it sit there, because if it sticks around too long they will have to buy them back, and their investors will sue them for it just as Nvidia's investors do every time Nvidia has had to buy back leftover cards. Most notoriously, the excess 1060s they had to buy back from Gigabyte was a very big deal, and AMD doesn't want a repeat of it.
That, and the 7900XT is made of silicon that couldn't quite make it as a 7900xtx, which, given the chiplet nature and the small silicon size, has a very low failure rate, so they really don't have many, and AMD does not want to be in a position where they are artificially gimping 7900xtx silicon to fill a market need.
 
So by your logic:

7900XT should really be the 7700XT, since the 3070 was the 6750XT's* competition, and since the 4070Ti and 7900XT, by your own logic, are successors to the 3070 and 6750XT.
7900XTX should actually be the 7800XT since the 6800XT was competing against the 3080. Since the 7900XTX and 4080 compete directly this is an apt comparison.

Here's a semi-breakdown based on the original MSRP's at launch, with comparison to your idea of where they should land in the GPU hierarchy:

6750XT* was $550 while the 3070 was $500. +$50 - Nvidia wins in price.
7900XT launched at $900 while the 4070ti launched at $800. Price increase AMD +$350 vs Nvidia +$300 - Nvidia wins

6800XT was $649 while the 3080 was $699. +$50 - AMD wins
7900XTX launched at $1000, and the 4080 launched at $1200. Price increase AMD +$350 vs Nvidia +$500 - AMD wins

Based on that, Nvidia is actually less guilty of price gouging in the mid-tier than AMD is, if you use your logic that the 4070Ti is misbranded and should instead have been a 4070. AMD's just as guilty of misbranding, and they are more guilty of charging more money. It's in the high end where AMD should be dinged hard for misbranding their 7900XTX, since it's their halo card competing against an 80-level card; in this comparison Nvidia is charging a hell of a lot more than AMD, but AMD's markup is not so innocent either, just not as bad as Nvidia's.

Based on the rumors so far (take with a big grain of salt): the 7700XT should launch with a $599 MSRP, same as the 4070, and the 7800XT should MSRP at $699, which will pin it in between the 4070 and 4070Ti. On the Nvidia side, the rumored price for the 4060Ti is $499. So, both sides are guilty as sin for this generation's overpriced GPUs.

So based on your logic of the 4070 actually being a successor to the 3060Ti, its direct competitor was the 6700XT, with the 7700XT being its successor, again with the MSRPs at launch:

6700XT $480 while the 3060Ti was $400. +$80 - Nvidia wins.
7700XT** $600 while the 4060Ti $500. Price increase AMD +$120 vs Nvidia +$100 - Nvidia wins.

I can't really put the 7800XT in here because it has no direct competitor; AMD's prediction is it will fall in line with the 6950XT, which puts it below the 4070Ti but above the 4070.

*I opted to use the 6750XT since the 6700XT more often than not got beat by the 3070, thus it wasn't a real competitor to it. The 6700XT competed more with the 3060Ti.

**I opted not to use the 7600XT since AMD's prediction is that it will compete with the 6700XT which is most likely going to be well behind the 4060Ti.

I chose to use MSRPs at launch because I wanted to reflect what each company wanted to charge at launch, not what scalpers were selling them at. Since you feel that Nvidia's guilty of misbranding their 40-series cards, I felt it was apt to compare last gen's card pricing, and the price increase, with their logical successors, based on your statement that the 4070Ti should actually be the 4070, and the 4070 should actually be the 4060Ti. What I've concluded is that Nvidia is gouging on their 4080 card, while in the lower tiers there was a price increase, but not as much of an increase as AMD's, if the rumors are true, again basing it on where you, per your slippery-slope statement, feel each card should land on the GPU hierarchy. AMD is also just as guilty, if not more so, of misbranding: their halo product is competing directly with Nvidia's second-best GPU, and comparing it to last gen's launches would have actually made the 7900XTX the successor to the 6800XT.

tl;dr - in a nutshell, both companies are guilty of misbranding and overcharging, but AMD is actually the more guilty party if we were to use your logic.

Pretty spot on. Both corporations are guilty of trying to get the most money while giving you the least. The only difference I see is that one hides behind the guise of the "for the people" BS that keeps getting regurgitated ad nauseam.

AMD priced the 7900xt where they did because they don't want to make them, and they don't want you to buy them. AMD needs the 6000 series to sell out; they can't afford to have it sit there, because if it sticks around too long they will have to buy them back, and their investors will sue them for it just as Nvidia's investors do every time Nvidia has had to buy back leftover cards. Most notoriously, the excess 1060s they had to buy back from Gigabyte was a very big deal, and AMD doesn't want a repeat of it.
That, and the 7900XT is made of silicon that couldn't quite make it as a 7900xtx, which, given the chiplet nature and the small silicon size, has a very low failure rate, so they really don't have many, and AMD does not want to be in a position where they are artificially gimping 7900xtx silicon to fill a market need.

This is also spot on; both companies have been pushing for people to buy older stock this whole time. One company just happens to have a product aimed squarely at the "money is no object; give me performance, price be damned" crowd.
 
Don't they already have this?

Microsoft DirectX DirectML technology an alternative to DLSS

https://www.guru3d.com/news-story/microsoft-eying-directml-as-dlss-alternative-on-xbox.html
DirectML is an API that somebody could use to create a model that could be used to build an open version of DLSS. But DLSS is a series of CUDA libraries.

DirectML, OpenCL, and two or three others are needed to do what CUDA does. Yeah, they are open, but many are poorly documented and sparsely supported, and they rely on two or three different environments with some interesting cross-incompatibilities.

That, and the best DirectML currently is still maybe a third the speed of CUDA's libraries, with a tenth the content.

Somebody will have to spend a decade and a few billion there to catch up. Intel is doing just that as part of the OneAPI initiative but it will take a lot of work before it gets anywhere.
 
Not even "money is no object" just "Nvidia, whatever new card this much hundreds of money gets me"

If money was no object:

[image]
 
tl;dr - in a nutshell, both companies are guilty of misbranding and overcharging, but AMD is actually the more guilty party if we were to use your logic.
I don't ever recall white-knighting for AMD so I don't get your point. I totally agree that they are both guilty. This is a 4070 thread, so I posted that.

What's your point?
 
Somebody will have to spend a decade and a few billion there to catch up. Intel is doing just that as part of the OneAPI initiative but it will take a lot of work before it gets anywhere.
Or they could base it on FSR. Even Apple has an upscaler called MetalFX. DLSS is only a serious feature if you plan to use Ray-Tracing, and maybe if your older GPU can't play games at the frame rate you desire. Considering this is an RTX 4070 we're talking about, you will probably only use DLSS for Ray-Tracing. Nvidia's DLSS is also limited to their newer GPUs, which makes adoption harder, especially when there are tools that can inject FSR into any game. As a Linux user I can just use a command before starting a game, or just place it in the .profile so it's on when my PC boots. In a sense, FSR is natively supported on Linux, so no need to have a questionable injector to enable it.

Code:
export WINE_FULLSCREEN_FSR=1
export WINE_FULLSCREEN_FSR_STRENGTH=3
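
If it helps anyone, this is roughly how I keep it on permanently versus per-game (same variables as above; the game path is just a placeholder, and this assumes a Wine/Proton build that carries the FSR patch):

Code:
# make it permanent: append the toggles to ~/.profile
cat >> ~/.profile <<'EOF'
export WINE_FULLSCREEN_FSR=1
export WINE_FULLSCREEN_FSR_STRENGTH=3
EOF

# or one-off, for a single game launch
WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=3 wine /path/to/game.exe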
 
Try this, just came out too


Yeah not a fan. That review had some errors in it as some have pointed out on Twitter. One of these rendering modes had issues and it wasn't FSR.
[screenshot of the rendering-mode error from the review]

In this shot DLSS just decides to make nuclear rain.

[screenshot of the DLSS rain artifact]
 
Or they could base it on FSR. Even Apple has an upscaler called MetalFX. DLSS is only a serious feature if you plan to use Ray-Tracing, and maybe if your older GPU can't play games at the frame rate you desire. Considering this is an RTX 4070 we're talking about, you will probably only use DLSS for Ray-Tracing. Nvidia's DLSS is also limited to their newer GPUs, which makes adoption harder, especially when there are tools that can inject FSR into any game. As a Linux user I can just use a command before starting a game, or just place it in the .profile so it's on when my PC boots. In a sense, FSR is natively supported on Linux, so no need to have a questionable injector to enable it.

Code:
export WINE_FULLSCREEN_FSR=1
export WINE_FULLSCREEN_FSR_STRENGTH=3
Hmm, so FSR whenever Wine is invoked. Nice.
 
970 was roughly equivalent to a 780 Ti - for $329
1070 was equivalent performance to a 980 Ti / Titan X - for $379
2070 was a bit worse than a 1080 Ti - for $499
2070 Super was roughly equivalent or beat a 1080 Ti - for $499
3070 was equivalent performance to a 2080 Ti - for $499
4070 is roughly equal to, but can still lose to a 3080 (regular, not Ti) - for $599

So not only is the 4070 not even matching the performance characteristics of previous 70-class branded GPUs, Nvidia also has the gall to charge you $100 extra, minimum, compared to those. x70 cards for many generations now have always met or beaten the previous generation's flagship. Here it trades blows with a 3080. Big whoop.

4070 and 4070 Ti are misbranded. 4070 Ti should be the 4070 (for 4070 prices), and the 4070 is really a 4060 Ti...for $600.
I'm a bit annoyed that in the reviews I've seen they ALWAYS compare the 4070 to a 10GB 3080 and not the 12GB variant. It loses to the lowly 10GB at 4K by a fair margin, and I suspect that margin would notch up against the 12GB, as it was better hardware and had 2GB more RAM.
Anyone have 3080 12GB vs. 4070 4K/VR graphs to share to shame the 4070 more?
 
Or they could base it on FSR. Even Apple has an upscaler called MetalFX. DLSS is only a serious feature if you plan to use Ray-Tracing, and maybe if your older GPU can't play games at the frame rate you desire. Considering this is an RTX 4070 we're talking about, you will probably only use DLSS for Ray-Tracing. Nvidia's DLSS is also limited to their newer GPUs, which makes adoption harder, especially when there are tools that can inject FSR into any game. As a Linux user I can just use a command before starting a game, or just place it in the .profile so it's on when my PC boots. In a sense, FSR is natively supported on Linux, so no need to have a questionable injector to enable it.

Code:
export WINE_FULLSCREEN_FSR=1
export WINE_FULLSCREEN_FSR_STRENGTH=3
You misunderstand what OneAPI is. OneAPI is Intel taking the clusterfuck of CUDA-wannabe projects that AMD and the Linux community as a whole have forked off, and rolling it all together into one well-documented and actually supported project.

FSR and FSR 2 are parts of the GPUOpen project that AMD fronted and branded, then left to the community to work on. They are a series of RadeonML libraries, and RadeonML is just AMD's branding on OpenML. OpenML and OpenCL make up the basis of OneAPI, so yes, FSR is already there.

But Intel is also making good headway on their reverse engineering of CUDA. Currently they have their translation compiler at around 80% compatibility.
https://www.intel.com/content/www/u...g-from-cuda-to-sycl-for-the-dpc-compiler.html

So with work, somebody could use it to reverse engineer DLSS while they are at it, which is where Intel XeSS comes in. XeSS requires accelerator cores, but it works on all three vendors' cards, so really AMD should be getting on board with XeSS, as it is the evolution of the projects they have attempted to start.
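
For a concrete idea of what that translation workflow looks like, here is a minimal sketch using Intel's DPC++ Compatibility Tool (dpct) from the oneAPI toolkit; the file name is just a placeholder, and real projects need more flags and manual fix-ups than this:

Code:
# translate a single CUDA source file to SYCL/DPC++
# (assumes the oneAPI environment has been sourced so dpct is on PATH)
source /opt/intel/oneapi/setvars.sh
dpct --out-root=sycl_out vector_add.cu
ls sycl_out/   # translated source lands here for manual cleanup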
 
You misunderstand what OneAPI is. OneAPI is Intel taking the clusterfuck of CUDA-wannabe projects that AMD and the Linux community as a whole have forked off, and rolling it all together into one well-documented and actually supported project.

FSR and FSR 2 are parts of the GPUOpen project that AMD fronted and branded, then left to the community to work on. They are a series of RadeonML libraries, and RadeonML is just AMD's branding on OpenML. OpenML and OpenCL make up the basis of OneAPI, so yes, FSR is already there.

But Intel is also making good headway on their reverse engineering of CUDA. Currently they have their translation compiler at around 80% compatibility.
https://www.intel.com/content/www/u...g-from-cuda-to-sycl-for-the-dpc-compiler.html

So with work, somebody could use it to reverse engineer DLSS while they are at it, which is where Intel XeSS comes in. XeSS requires accelerator cores, but it works on all three vendors' cards, so really AMD should be getting on board with XeSS, as it is the evolution of the projects they have attempted to start.
What? This is the most confused, bastardized definition of OneAPI I've ever heard of. Many contributors work on OneAPI. Its links to AMD are damn near zero, other than its individual contributors to Khronos. No, it's not forked from AMD projects either. OneAPI is Intel's attempt to compete with (if not beat) CUDA by using a series of open projects that cover more than what CUDA covers, very few of them (if any) started by AMD.

OpenML was started back in 2001. RadeonML is a wrapper SDK. RadeonML is what you use when you want to invoke machine learning utilizing a variety of different libraries. It's high level. OpenML is low level (apples vs oranges).

FSR was initially (1.0) based on an open spatial upscaling project, but it received many updates from AMD to make 2.0 temporal. It has nothing to do with OneAPI, as Intel's XeSS utilizes machine learning (OpenML), which AMD hasn't even introduced yet (maybe in FSR 3.0).
 
What? This is the most confused, bastardized definition of OneAPI I've ever heard of. Many contributors work on OneAPI. Its links to AMD are damn near zero, other than its individual contributors to Khronos. No, it's not forked from AMD projects either. OneAPI is Intel's attempt to compete with (if not beat) CUDA by using a series of open projects that cover more than what CUDA covers, very few of them (if any) started by AMD.

OpenML was started back in 2001. RadeonML is a wrapper SDK. RadeonML is what you use when you want to invoke machine learning utilizing a variety of different libraries. It's high level. OpenML is low level (apples vs oranges).

FSR was initially (1.0) based on an open spatial upscaling project, but it received many updates from AMD to make 2.0 temporal. It has nothing to do with OneAPI, as Intel's XeSS utilizes machine learning (OpenML), which AMD hasn't even introduced yet (maybe in FSR 3.0).
I should have been more clear: OneAPI is not a fork, it is Intel cleaning up all the shitty forks. There was a time when there was a unified effort to replace CUDA with an open standard; then they fought, then they forked, then they died.

I was under the mistaken impression that FSR was an ML-based library.

But there is a good GitHub project working on getting the ROCm and CUDA compilers to output things that work natively with OneAPI, so the RadeonML projects should get coverage through it, and based on what I see it's coming along nicely.
https://github.com/OpenSYCL/OpenSYCL
 