Anyone seriously considering the 4070 Ti?

The bus size isn't the main variable at play here. It's the fact that the 4070 Ti only has 12GB of VRAM, half that of the 3090 Ti. 4K uses a lot of VRAM, and the performance of the 4070 Ti drops quickly once you run out of it.
They should really stop gimping their cards with relatively low amounts of VRAM (for the prices they're demanding) and trying to make up for it with crutches like DLSS 3.
 
The bus size isn't the main variable at play here. It's the fact that the 4070 Ti only has 12GB of VRAM, half that of the 3090 Ti. 4K uses a lot of VRAM, and the performance of the 4070 Ti drops quickly once you run out of it.
Yeah, I was watching one review, and there were a couple of games where the 4070 Ti was having to spill over into system RAM. In those cases the 4080 had a massive increase in performance over the 4070 Ti compared to the games where VRAM was not a problem. Sometimes people look at VRAM allocation and say "that's not what's actually needed," but what they don't always see is that some games won't even allocate all of your VRAM even though they need more, and will simply pull from system RAM before technically running out of VRAM. An $800 video card should not be running into that problem on day one, and it's of course only going to get worse in future games.
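For a rough sense of why spilling into system RAM hurts so much, here's a quick back-of-the-envelope sketch in Python. The bandwidth figures are approximate published specs I'm assuming for illustration, not anything measured in that review:

```python
# Order-of-magnitude comparison of local VRAM bandwidth vs. fetching over PCIe
# once VRAM is exhausted. Figures are approximate published specs (assumed).

vram_bw_gbs = 504.0       # RTX 4070 Ti: 192-bit bus @ 21 Gbps GDDR6X ~= 504 GB/s
pcie4_x16_gbs = 32.0      # PCIe 4.0 x16 one-way ceiling ~= 32 GB/s

print(f"Local VRAM:          {vram_bw_gbs:6.0f} GB/s")
print(f"PCIe 4.0 x16 to RAM: {pcie4_x16_gbs:6.0f} GB/s")
print(f"Penalty factor:      ~{vram_bw_gbs / pcie4_x16_gbs:.0f}x slower per byte")
```

Anything the driver has to stream over PCIe instead of local VRAM arrives roughly an order of magnitude slower, which is why frame times fall off a cliff instead of degrading gracefully.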
 
AMD learned this with RDNA2 as well. Extra cache only goes so far to compensate for a narrow memory bus. RDNA2 was great, but it definitely hit problems at 4K on its 256-bit bus versus the wider 320-bit and 384-bit Nvidia Ampere offerings.

I would have liked to see the 4070 Ti be a cut-down AD103 on a 256-bit bus. That would line up closer to historical 80- and 70-class specs/branding (post-Fermi), and maybe make the $800 asking price slightly more palatable. As it stands, you're getting Nvidia's third-tier chip for top dollar. It's clear this thing is a super expensive 1440p card. I'd never buy a card with a 192-bit interface and, at this stage, 12GB, for 4K (yes, I know my 3080 Ti has 12GB; luckily I'm not always playing AAA stuff).
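To put the bus-width point in numbers, here's a minimal sketch of peak bandwidth as (bus width / 8) × per-pin data rate; the data rates are the published GDDR6X speeds for each card, so treat the exact figures as approximate:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# Approximate published memory configurations (assumed, not measured):
cards = {
    "RTX 4070 Ti (192-bit @ 21 Gbps)": (192, 21.0),
    "RTX 3080    (320-bit @ 19 Gbps)": (320, 19.0),
    "RTX 3090 Ti (384-bit @ 21 Gbps)": (384, 21.0),
}

for name, (width, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")
```

Ada's big L2 hides a lot of that gap at 1080p/1440p, but cache hit rates drop as working sets grow at 4K, and then the raw number starts to matter again.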
 
AMD learned this with RDNA2 as well. Extra cache only goes so far to compensate for a narrow memory bus. RDNA2 was great, but it definitely hit problems at 4K on its 256-bit bus versus the wider 320-bit and 384-bit Nvidia Ampere offerings.

I would have liked to see the 4070 Ti be a cut-down AD103 on a 256-bit bus. That would line up closer to historical 80- and 70-class specs/branding (post-Fermi), and maybe make the $800 asking price slightly more palatable. As it stands, you're getting Nvidia's third-tier chip for top dollar. It's clear this thing is a super expensive 1440p card. I'd never buy a card with a 192-bit interface and, at this stage, 12GB, for 4K (yes, I know my 3080 Ti has 12GB; luckily I'm not always playing AAA stuff).

Now I do wonder what the regular RTX 4070 will be. Probably the same thing, just slightly slower. Which also makes me wonder what the price will be. Probably the same, just slightly lower... which would still be too much for what it is.
 
Now I do wonder what the regular RTX 4070 will be. Probably the same thing, just slightly slower. Which also makes me wonder what the price will be. Probably the same, just slightly lower... which would still be too much for what it is.
Some rumors from the usual leakers have been pointing to 5888 CUDA cores, which is the same as the 3070. That seems kind of sad, especially since Ada Lovelace doesn't appear to have any IPC improvement at all, and if anything there's a slight regression. That means it's just going to have to rely on the roughly 40% increase in boost clocks, so it's likely to only end up around 25 to 30% faster than the 3070.
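For what it's worth, here's that estimate as a minimal Python sketch, assuming the rumored core count, a ~40% clock bump, and a scaling-efficiency factor that is purely a guess on my part:

```python
# Hypothetical RTX 4070 estimate from rumored specs. scaling_efficiency is a
# made-up illustrative value, not a measurement; real results vary per game.

cores_3070 = 5888
cores_4070 = 5888          # rumored: same core count as the 3070
clock_uplift = 1.40        # rumored ~40% higher boost clock
scaling_efficiency = 0.70  # assumed: only ~70% of the clock gain shows up as fps

ceiling = (cores_4070 / cores_3070) * clock_uplift
estimate = 1 + (ceiling - 1) * scaling_efficiency

print(f"Theoretical ceiling: +{(ceiling - 1) * 100:.0f}%")   # +40%
print(f"Rough estimate:      +{(estimate - 1) * 100:.0f}%")  # ~+28%, i.e. the 25-30% range
```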
 
Some rumors from the usual leakers have been pointing to 5888 CUDA cores, which is the same as the 3070. That seems kind of sad, especially since Ada Lovelace doesn't appear to have any IPC improvement at all, and if anything there's a slight regression. That means it's just going to have to rely on the roughly 40% increase in boost clocks, so it's likely to only end up around 25 to 30% faster than the 3070.
It's tough to predict what Nvidia will do for the 4070, because the 3070 Ti was BARELY better than the 3070 despite everything they added to the card.
 
Some rumors from the usual leakers have been pointing to 5888 CUDA cores, which is the same as the 3070. That seems kind of sad, especially since Ada Lovelace doesn't appear to have any IPC improvement at all, and if anything there's a slight regression. That means it's just going to have to rely on the roughly 40% increase in boost clocks, so it's likely to only end up around 25 to 30% faster than the 3070.

Would be underwhelming if so.
 
Some rumors from the usual leakers have been pointing to 5888 CUDA cores, which is the same as the 3070. That seems kind of sad, especially since Ada Lovelace doesn't appear to have any IPC improvement at all, and if anything there's a slight regression. That means it's just going to have to rely on the roughly 40% increase in boost clocks, so it's likely to only end up around 25 to 30% faster than the 3070.
Citation needed for your claim.
There is plenty in the core that has better IPC, so I suspect you are either arguing against the facts, just making things up, or simply ignoring a lot of what's new in the core.
Feel free to prove me wrong.
 
Citation needed for your claim.
There is plenty in the core that has better IPC, so I suspect you are either arguing against the facts, just making things up, or simply ignoring a lot of what's new in the core.
Feel free to prove me wrong.
You have the same ability as anyone else to look at the number of cores and look at the clock speeds.
 
Considering the size of the die, I can see them having good yields even if the 4070 ends up really close to the Ti.

I am also curious about the claim of no IPC gain. Is that based on benchmarks at the same clock speed, or on the assumption that a clock increase should give a nearly perfect linear increase in performance?
 
Considering the size of the die, I can see them having good yields even if the 4070 ends up really close to the Ti.

I am also curious about the claim of no IPC gain. Is that based on benchmarks at the same clock speed, or on the assumption that a clock increase should give a nearly perfect linear increase in performance?
The type of core used in Ada Lovelace is essentially the same as in the previous architecture. All you have to do is look at the number of cores, look at the clock speeds, and then look at the performance increase. Of course, the higher-end cards will have worse scaling within every architecture.
 
The type of core used in Ada Lovelace is essentially the same as in the previous architecture. All you have to do is look at the number of cores, look at the clock speeds, and then look at the performance increase. Of course, the higher-end cards will have worse scaling within every architecture.
But wouldn't that assume performance increases in perfect proportion to clock speed?

Would it not be possible to have, say, a 7% IPC gain at the same clock, plus only a 20% gain from a 30% frequency increase, since performance does not scale 1:1 with frequency?
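To make that concrete, here's a small hypothetical decomposition in Python. The 7% IPC figure and both efficiency numbers are made up purely to show that two very different readings can produce the same end result:

```python
# Two hypothetical readings of the same generational uplift. All numbers are
# illustrative only; the point is that they are indistinguishable from the outside.

freq_gain = 0.30  # +30% clock speed

# Reading A: no IPC gain, clocks scale at ~93% efficiency
uplift_a = 1.00 * (1 + freq_gain * 0.93)

# Reading B: +7% IPC at iso-clock, clocks scale at only ~67% efficiency
uplift_b = 1.07 * (1 + freq_gain * 0.67)

print(f"Reading A (no IPC, good scaling):  +{(uplift_a - 1) * 100:.1f}%")  # ~+27.9%
print(f"Reading B (+7% IPC, poor scaling): +{(uplift_b - 1) * 100:.1f}%")  # ~+28.5%
```

Both readings land within a point of each other, which is exactly why you can't separate IPC from clock scaling without iso-clock benchmarks.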
 
But wouldn't that assume performance increases in perfect proportion to clock speed?

Would it not be possible to have, say, a 7% IPC gain at the same clock, plus only a 20% gain from a 30% frequency increase, since performance does not scale 1:1 with frequency?
You're making it way too complicated, and this is not an exact science. It's just a very simple observation anyone can make by looking at the number of cores, the clock speed, and the performance increase over the last generation at the same settings. There is no IPC improvement in this architecture, and that is completely clear to see. If the 4070 releases with the exact same number of cores as the 3070 and is clocked 40 to 45% higher, it absolutely will not be anywhere near 40 to 45% faster. Again, though, higher-end GPUs will always have even worse scaling.
 
You have the same ability as anyone else to look at the number of cores and look at the clock speeds.
You made the claim.
Logic dictates you provide the data for your claim.
Your "argumentation" makes me certain you have no data to back up your claim.
 
You made the claim.
Logic dictates you provide the data for your claim.
Your "argumentation" makes me certain you have no data to back up your claim.
Yeah sorry for assuming that you were capable of doing basic math.
 
Yeah sorry for assuming that you were capable of doing basic math.
You seem angry for someone without any data.
Getting called out is not pleasant when one does not have any data to support one's claims.
But you should not point the finger at me when you are the one to blame.
I can write you off as "noise" now, thank you.

(I suggest you start with the whitepaper... it holds good information)
 
You seem angry for someone without any data.
Getting called out is not pleasant when one does not have any data to support one's claims.
But you should not point the finger at me when you are the one to blame.
I can write you off as "noise" now, thank you.

(I suggest you start with the whitepaper... it holds good information)
You are the one who has had a sarcastic, passive-aggressive tone from the very beginning, and you know it. I am on mobile throughout most of the day, but if you are truly that damn helpless then I'll put some simple math up for you later on. In the meantime, if you're able to locate an adult with an IQ above 50, perhaps they can help you out.
 
You are the one who has had a sarcastic, passive-aggressive tone from the very beginning, and you know it. I am on mobile throughout most of the day, but if you are truly that damn helpless then I'll put some simple math up for you later on. In the meantime, if you're able to locate an adult with an IQ above 50, perhaps they can help you out.
Your "math" will be useless, when you find benchmarks/reviewed data that support your claim I will read it...until then you are simply "noise".
 
Some clues to IPC changes:
https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/2.html

Interesting how a simple "Do you have data for your claims?" turned into conspiracy galore.
The question is whether this is a coordinated pile-on (a "buddy" distraction to pull focus away from the missing data) or just a sad forum trend ("how dare you ask for validation of my claim!").
Other than the re-worded marketing blurb at the top of the page, I didn't see anything there indicating much, if any, actual IPC increase. I do remember reading somewhere (I don't remember where) that once you take into account the added hardware and clock speed, the actual IPC increase between the 30-series and 40-series cards is pretty much zero.

Since you're the one claiming that there are definite IPC increases, please show your proof, since you obviously have it.
 
I searched a bit, but I cannot find a single same-clock benchmark that looks at IPC changes for Lovelace vs. Ampere the way reviewers do for CPUs.
 
I searched a bit, but I cannot find a single same-clock benchmark that looks at IPC changes for Lovelace vs. Ampere the way reviewers do for CPUs.
You can do the math yourself by looking at the CUDA cores and clock speeds.

The best example, the one that puts Ada Lovelace in the best light, is the 4080 at 4K, since it is not CPU-limited and does not have the scaling issues of the 4090; then compare the 4080 to the 3080.

The 4080 has 9728 cores and in real-world testing in the TechPowerUp review averages 2737 MHz. https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/40.html

The 3080 has 8704 cores and in real-world testing in the TechPowerUp review averages 1931 MHz. https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/32.html

9728 x 2737 = 26,625,536

8704 x 1931 = 16,807,424

26,625,536 / 16,807,424 ≈ 1.58, i.e. about 58% higher

That means that, with unrealistically perfect scaling and no other limitations, the 4080 could be 58% faster than the 3080 if they had the same IPC. The overall measured difference was 49% at 4K, where the 3080 was also likely running into VRAM constraints in a couple of games. https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/32.html

This is NOT scientific, of course, but it does give you an idea that IPC is likely the same at best. If you used the 4090 it would look like a huge IPC regression, but big GPUs don't scale well and many games run into other limitations, so the huge core count never comes close to being fully utilized.

So the POINT earlier was that if the 3070 and 4070 end up with the same core count, it clearly is going to come down to the 4070's large clock speed advantage to make the difference.
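Here's the same napkin math as a short Python script, using the TechPowerUp figures quoted above; everything beyond those numbers is plain arithmetic:

```python
# Napkin math from the post above: throughput ceiling = cores x average clock,
# compared against the measured 4K uplift. A ballpark check, not a controlled test.

cores_4080, clock_4080_mhz = 9728, 2737   # TechPowerUp measured average clock (4080 FE)
cores_3080, clock_3080_mhz = 8704, 1931   # TechPowerUp measured average clock (3080 FE)

ceiling = (cores_4080 * clock_4080_mhz) / (cores_3080 * clock_3080_mhz)
observed_4k = 1.49                        # TechPowerUp 4K relative performance

print(f"Cores x clock ceiling: +{(ceiling - 1) * 100:.0f}%")                       # ~+58%
print(f"Observed at 4K:        +{(observed_4k - 1) * 100:.0f}%")                   # +49%
print(f"Fraction of ceiling:   {(observed_4k - 1) / (ceiling - 1):.0%} realized")  # ~84%
```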
 
You can do the math yourself by looking at the CUDA cores and clock speeds.

The best example, the one that puts Ada Lovelace in the best light, is the 4080 at 4K, since it is not CPU-limited and does not have the scaling issues of the 4090; then compare the 4080 to the 3080.

The 4080 has 9728 cores and in real-world testing in the TechPowerUp review averages 2737 MHz. https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/40.html

The 3080 has 8704 cores and in real-world testing in the TechPowerUp review averages 1931 MHz. https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/32.html

9728 x 2737 = 26,625,536

8704 x 1931 = 16,807,424

26,625,536 / 16,807,424 ≈ 1.58, i.e. about 58% higher

That means that, with unrealistically perfect scaling and no other limitations, the 4080 could be 58% faster than the 3080 if they had the same IPC. The overall measured difference was 49% at 4K, where the 3080 was also likely running into VRAM constraints in a couple of games. https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/32.html

This is NOT scientific, of course, but it does give you an idea that IPC is likely the same at best. If you used the 4090 it would look like a huge IPC regression, but big GPUs don't scale well and many games run into other limitations, so the huge core count never comes close to being fully utilized.

So the POINT earlier was that if the 3070 and 4070 end up with the same core count, it clearly is going to come down to the 4070's large clock speed advantage to make the difference.
This is incredibly flawed. You are assuming that the 3080 and 4080 behave the same clock for clock, and just this fact alone should invalidate your math. Does anyone have a source for any clock for clock benches?

Without clock for clock benches, both lines of thinking are neither proven nor disproven.
 
I still feel like, if performance does not fully scale as you ramp frequency, the difference could be a combination of better IPC at the same starting frequency plus a higher frequency boost (with diminishing returns).

A 4080 has so many more transistors per core...

And even at 4K a game is not always a pure GPU test. To gain 49% in fps just by changing the GPU, that GPU needs to be more than 49% stronger, no? And by a good amount.
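That last point can be sketched with a simple Amdahl-style frame-time model. The 10% non-GPU share of frame time below is an assumption for illustration, not a measured number:

```python
# Amdahl-style model: only the GPU-bound portion of a frame speeds up when you
# swap in a faster GPU. non_gpu_fraction is a made-up illustrative value.

def fps_gain(gpu_speedup: float, non_gpu_fraction: float = 0.10) -> float:
    """Overall fps gain when only (1 - non_gpu_fraction) of frame time scales with the GPU."""
    new_frame_time = non_gpu_fraction + (1 - non_gpu_fraction) / gpu_speedup
    return 1.0 / new_frame_time

for speedup in (1.49, 1.60, 1.70):
    print(f"GPU {speedup:.2f}x stronger -> overall +{(fps_gain(speedup) - 1) * 100:.0f}% fps")
```

With even 10% of frame time outside the GPU, a 1.49x stronger GPU only buys about +42% fps; you need roughly a 1.6x stronger GPU to show a 49% gain.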
 
This is incredibly flawed. You are assuming that the 3080 and 4080 behave the same clock for clock, and just this fact alone should invalidate your math. Does anyone have a source for any clock for clock benches?

Without clock for clock benches, both lines of thinking are neither proven nor disproven.
Any test you do, including one at the same clocks, could be called "flawed" in some way, as there are many factors involved. The bottom line is that, AGAIN, if both the 4070 and 3070 have the same core count, it will be the clock speed making the performance difference, based on everything seen so far.
 
This is incredibly flawed. You are assuming that the 3080 and 4080 behave the same clock for clock, and just this fact alone should invalidate your math. Does anyone have a source for any clock for clock benches?

Without clock for clock benches, both lines of thinking are neither proven nor disproven.
I've got a 3080, 3090, 3070, 3060ti, but no 4XXX - someone wanna loan me one? I'll do whatever tests ya want.
 
Any test you do, including one at the same clocks, could be called "flawed" in some way, as there are many factors involved. The bottom line is that, AGAIN, if both the 4070 and 3070 have the same core count, it will be the clock speed making the performance difference, based on everything seen so far.
You are comparing apples to oranges. Whatever you think you've seen so far is incorrect, plain and simple. Different amounts of VRAM, different bandwidth, and a different architecture. The fact that the 4080 can perform far above a 3080 while consuming considerably less power should tell you that a simple CUDA cores x clock speed equation is about as useful as what time of day the tests were run and whether a groundhog's ass cheeks were flush or pale.
 
You are comparing apples to oranges. Whatever you think you've seen so far is incorrect, plain and simple. Different amounts of VRAM, different bandwidth, and a different architecture. The fact that the 4080 can perform far above a 3080 while consuming considerably less power should tell you that a simple CUDA cores x clock speed equation is about as useful as what time of day the tests were run and whether a groundhog's ass cheeks were flush or pale.
And what will you say if the 4070 comes out with the same core count as the 3070 and has a performance increase just below its frequency advantage over the 3070? In your last post you said it would need to be a clock-for-clock comparison, yet now you are bringing up all the other variables, even though I already said any test could be considered "flawed". Anyway, I am done with this topic, but I at least tried to make some effort at a comparison, and in my eyes there is no real IPC improvement in the cores.
 
You are comparing apples to oranges. Whatever you think you've seen so far is incorrect, plain and simple. Different amounts of VRAM, different bandwidth, and a different architecture. The fact that the 4080 can perform far above a 3080 while consuming considerably less power should tell you that a simple CUDA cores x clock speed equation is about as useful as what time of day the tests were run and whether a groundhog's ass cheeks were flush or pale.
You shouldn't have brought up the amount of electricity used. The lower power consumption is almost completely due to moving from Samsung 8nm to TSMC 4nm. If anything, using power consumption as a metric is harmful to the argument that IPC is better.

This is incredibly flawed. You are assuming that the 3080 and 4080 behave the same clock for clock, and just this fact alone should invalidate your math. Does anyone have a source for any clock for clock benches?

Without clock for clock benches, both lines of thinking are neither proven nor disproven.
There's nothing wrong with his math or the way he's using it. It's not going to be 100% accurate but it is useful as a ballpark figure. It's also impossible to make a 1:1 comparison for the reason you just mentioned. However, it doesn't invalidate his argument.

It's ironic that your argument is harming your own position.

Other than slightly lower memory bandwidth compared to the 3080, the 4080 is better in every way. It has (based on official Nvidia numbers) approximately a 46.5% advantage in clock speed (2505 MHz vs 1710 MHz boost clocks) and an 11.76% advantage in the number of CUDA cores (9728 vs 8704). Based on the TechPowerUp graphs, the 4080 has an average 19%, 27% and 33% performance advantage over the 3080 at the various resolutions. Those numbers aren't looking good for the IPC of the 40-series cards, especially when you consider that other facets of the 4080 are better than the 3080's.
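Running those official numbers through the same kind of quick check (spec-sheet boost clocks and core counts only; sustained clocks in reviews are different):

```python
# Ballpark ceiling from Nvidia's official spec-sheet numbers rather than
# measured average clocks. Spec-sheet boost clocks understate real sustained clocks.

clock_ratio = 2505 / 1710   # 4080 vs 3080 official boost clock
core_ratio = 9728 / 8704    # 4080 vs 3080 CUDA core count

print(f"Clock advantage:  +{(clock_ratio - 1) * 100:.1f}%")              # ~+46.5%
print(f"Core advantage:   +{(core_ratio - 1) * 100:.1f}%")               # ~+11.8%
print(f"Combined ceiling: +{(clock_ratio * core_ratio - 1) * 100:.0f}%") # ~+64%
```

Against that ~64% ceiling, the measured 4K uplift (49% once the summary chart is read correctly, as noted further down) leaves little room for an iso-clock IPC gain in rasterized workloads.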
 
If I didn't have a nicely performing 3080 Ti, and the 4070 Ti was more like $499 or $599, it would be a no-brainer.
 
You shouldn't have brought up the amount of electricity used. The lower power consumption is almost completely due to moving from Samsung 8nm to TSMC 4nm. If anything, using power consumption as a metric is harmful to the argument that IPC is better.


There's nothing wrong with his math or the way he's using it. It's not going to be 100% accurate but it is useful as a ballpark figure. It's also impossible to make a 1:1 comparison for the reason you just mentioned. However, it doesn't invalidate his argument.

It's ironic that your argument is harming your own position.

Other than slightly lower memory bandwidth compared to the 3080, the 4080 is better in every way. It has (based on official Nvidia numbers) approximately a 46.5% advantage in clock speed (2505 MHz vs 1710 MHz boost clocks) and an 11.76% advantage in the number of CUDA cores (9728 vs 8704). Based on the TechPowerUp graphs, the 4080 has an average 19%, 27% and 33% performance advantage over the 3080 at the various resolutions. Those numbers aren't looking good for the IPC of the 40-series cards, especially when you consider that other facets of the 4080 are better than the 3080's.
Thanks, but you did misread the performance summary charts. For example, the 3080 delivers 67% of the performance of the 4080 at 4K, so that actually makes the 4080 about 49% faster, not 33%.
 
Thanks, but you did misread the performance summary charts. For example, the 3080 delivers 67% of the performance of the 4080 at 4K, so that actually makes the 4080 about 49% faster, not 33%.
Yeah, I completely blanked out and did the wrong conversion. But it backs up your point perfectly. The 4080 has a much higher clock speed, quite a few more CUDA cores, more RAM, and more cache (to help offset the small decrease in memory bandwidth), and the performance at 4K isn't much more than what you would get with perfect linear scaling of clock speed alone.

I'll reiterate regarding IPC: I don't remember where I read it, but I swear one review concluded that the 40-series architecture showed no IPC gain, or possibly an IPC decrease, versus the 30-series. Based on the specs and the averaging graph above, that looks to be the case. It could also be a reason why Nvidia has been banging the drum for DLSS 3 so hard; the architecture improvements for performance simply aren't there unless you're using DLSS 3.
 
You shouldn't have brought up the amount of electricity used. The lower power consumption is almost completely due to moving from Samsung 8nm to TSMC 4nm. If anything, using power consumption as a metric is harmful to the argument that IPC is better.
No one is using power consumption as a metric, that was my point.
There's nothing wrong with his math or the way he's using it. It's not going to be 100% accurate but it is useful as a ballpark figure. It's also impossible to make a 1:1 comparison for the reason you just mentioned. However, it doesn't invalidate his argument.
Everything is wrong with his math; again, apples to oranges. [H] has been a staunch voice against this type of bullshit comparison for decades.
It's ironic that your argument is harming your own position.
No, it's not; no matter how hard you pretend it to be so.
Other than slightly lower memory bandwidth compared to the 3080, the 4080 is better in every way. It has (based on official Nvidia numbers) approximately a 46.5% advantage in clock speed (2505 MHz vs 1710 MHz boost clocks) and an 11.76% advantage in the number of CUDA cores (9728 vs 8704). Based on the TechPowerUp graphs, the 4080 has an average 19%, 27% and 33% performance advantage over the 3080 at the various resolutions. Those numbers aren't looking good for the IPC of the 40-series cards, especially when you consider that other facets of the 4080 are better than the 3080's.
Again, apples to oranges.
 
You can do the math yourself by looking at the CUDA cores and clock speeds.

The best example, the one that puts Ada Lovelace in the best light, is the 4080 at 4K, since it is not CPU-limited and does not have the scaling issues of the 4090; then compare the 4080 to the 3080.

The 4080 has 9728 cores and in real-world testing in the TechPowerUp review averages 2737 MHz. https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/40.html

The 3080 has 8704 cores and in real-world testing in the TechPowerUp review averages 1931 MHz. https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/32.html

9728 x 2737 = 26,625,536

8704 x 1931 = 16,807,424

26,625,536 / 16,807,424 ≈ 1.58, i.e. about 58% higher

That means that, with unrealistically perfect scaling and no other limitations, the 4080 could be 58% faster than the 3080 if they had the same IPC. The overall measured difference was 49% at 4K, where the 3080 was also likely running into VRAM constraints in a couple of games. https://www.techpowerup.com/review/nvidia-geforce-rtx-4080-founders-edition/32.html

This is NOT scientific, of course, but it does give you an idea that IPC is likely the same at best. If you used the 4090 it would look like a huge IPC regression, but big GPUs don't scale well and many games run into other limitations, so the huge core count never comes close to being fully utilized.

So the POINT earlier was that if the 3070 and 4070 end up with the same core count, it clearly is going to come down to the 4070's large clock speed advantage to make the difference.



I had better quote my own post too:

Citation needed for your claim.
There is plenty in the core that has better IPC, so I suspect you are either arguing against the facts, just making things up, or simply ignoring a lot of what's new in the core.
Feel free to prove me wrong.
The data indicates that I was right, since:

- Opacity Micromap Engine = 2x faster alpha traversal
- Displaced Micro-Mesh Engine = 10x faster BVH build and 20x less BVH space than Ampere
- Ray-triangle intersection = 2x faster than Ampere
- Shader Execution Reordering pipeline = 2x performance improvement for RT shaders (up to 40% faster in the upcoming Cyberpunk 2077 Overdrive ray tracing)
- Optical Flow Accelerator = 2x faster than Ampere
- Fourth-generation Tensor Cores = 2x faster FP16, BF16, TF32, INT8 & INT4 than Ampere

But of course, if you only look at "rasterized" performance, you can try to tilt reality to suit a cherry-picked goal.
We have left the purely "rasterized" world.
FSR 1+2 (AMD), XeSS (Intel) & DLSS 1+2 (Nvidia) have entered the game.
Upscaling has been used on consoles for years, and now it has entered the PC market.
DLSS 3 (Frame Generation) is already here, AMD has announced their own solution coming in 2023, and Intel will have to follow suit if they want to be competitive in the GPU space.
AI is the new player in the field, and it will only increase its presence.

Ray tracing is also here, and (contrary to a vocal minority on forums) the latest sales numbers show that consumers are voting with their wallets.
The two vendors with dedicated ray-tracing cores (Intel and Nvidia) are increasing their market share, while AMD (which is still missing dedicated ray-tracing cores) is losing market share.

When you sum it all up, your claim is not aligned with reality, only with a cherry-picked goal that no longer reflects it.

Deflecting from this only shows bias, and wearing blinders will only leave you more and more out of touch with the facts.
I did not even go into the increased cache, warps/threads, ROPs, or your assumption of linear scaling, but this should suffice to show that your "napkin math" is useless, as I stated (way oversimplified, and focused on one part of the pie to fit a cherry-picked metric).

Now please continue the personal attacks and confirm my argument about you just being "noise" ;)
 
You can post all the useless numbers and performance-improvement claims you want, but what matters is the actual performance.
 
No one is using power consumption as a metric, that was my point.

Everything is wrong with his math; again, apples to oranges. [H] has been a staunch voice against this type of bullshit comparison for decades.

No, it's not; no matter how hard you pretend it to be so.

Again, apples to oranges.
[H] has never been against direct comparisons; otherwise there never would have been a single hardware review.

And it's not apples to oranges. It's apples to apples, or oranges to oranges, whichever you prefer. It's a direct comparison between clock speeds and CUDA cores. In the case of the 4080, it has the advantage in every single metric except memory bandwidth, which isn't that far off. Those direct comparisons are paired with the averaged performance numbers for the 3080 and 4080 across the exact same games at the exact same settings.

Just because you don't like the results of the comparisons doesn't make them wrong.

By the way, since you love to use the apples-to-oranges line, it's time for you to explain exactly why the numbers can't be used. Explain to everyone why one is an apple and the other is an orange.
I had better quote my own post too:


The data indicates that I was right, since:

- Opacity Micromap Engine = 2x faster alpha traversal
- Displaced Micro-Mesh Engine = 10x faster BVH build and 20x less BVH space than Ampere
- Ray-triangle intersection = 2x faster than Ampere
- Shader Execution Reordering pipeline = 2x performance improvement for RT shaders (up to 40% faster in the upcoming Cyberpunk 2077 Overdrive ray tracing)
- Optical Flow Accelerator = 2x faster than Ampere
- Fourth-generation Tensor Cores = 2x faster FP16, BF16, TF32, INT8 & INT4 than Ampere

But of course, if you only look at "rasterized" performance, you can try to tilt reality to suit a cherry-picked goal.
We have left the purely "rasterized" world.
FSR 1+2 (AMD), XeSS (Intel) & DLSS 1+2 (Nvidia) have entered the game.
Upscaling has been used on consoles for years, and now it has entered the PC market.
DLSS 3 (Frame Generation) is already here, AMD has announced their own solution coming in 2023, and Intel will have to follow suit if they want to be competitive in the GPU space.
AI is the new player in the field, and it will only increase its presence.

Ray tracing is also here, and (contrary to a vocal minority on forums) the latest sales numbers show that consumers are voting with their wallets.
The two vendors with dedicated ray-tracing cores (Intel and Nvidia) are increasing their market share, while AMD (which is still missing dedicated ray-tracing cores) is losing market share.

When you sum it all up, your claim is not aligned with reality, only with a cherry-picked goal that no longer reflects it.

Deflecting from this only shows bias, and wearing blinders will only leave you more and more out of touch with the facts.
I did not even go into the increased cache, warps/threads, ROPs, or your assumption of linear scaling, but this should suffice to show that your "napkin math" is useless, as I stated (way oversimplified, and focused on one part of the pie to fit a cherry-picked metric).

Now please continue the personal attacks and confirm my argument about you just being "noise" ;)
Nice, you regurgitated a press release. Congratulations. You still haven't put up the numbers and the proof of your claim. Care to do so?
 
And it's not apples to oranges. It's apples to apples, or oranges to oranges, whichever you prefer.

Different amounts of VRAM, different bandwidth; it is NOT apples to apples. FULL STOP.

Edit: I will also be ignoring this bullshit comparison from here on out, as it is already so far off topic that this thread should get scrubbed.
 
Getting back on topic: I'm gaming on a 3440x1440 ultrawide at 100 Hz, with a 750W Platinum PSU, 16GB of system RAM, and a 2070 Super. Not interested in RT. Should I go 4070 Ti or 4080? Maybe a 7900 XTX?
 