Stop being a bunch of bickering idiots. Debate, discuss, stop with the petty BS.
Until we can see this for VRAM usage:
View attachment 280291
Then VRAM usage is useless...just like your whine.
Bad data is just that...bad data.
You can try to whine and throw fallacies around, but it doesn't change a thing...you are trying to draw a conclusion, but lack the data to support it.
It's not rocket science, so why are you acting so "ignorant"?
Well even if some of the VRAM is cached, that doesn't make the data invalid.
Obviously the cached data serves a purpose, to improve performance or reduce load / pop-in errors, etc. It's not there for nothing.
Stop being a bunch of bickering idiots. Debate, discuss, stop with the petty BS.
Just in case anyone skimmed past this.
As for the card, 80 CUs would be a beast at PS5-like clocks. Should be around 20 TF of performance.
That bandwidth though. 256 bit at GDDR6? Just doesn't seem enough.
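For what it's worth, the bandwidth arithmetic is straightforward. The 16 Gbps GDDR6 rate below is an assumption for illustration; the 3080 comparison uses its 320-bit, 19 Gbps GDDR6X configuration:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8 bytes) * per-pin rate in Gbps.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gb_s(256, 16.0))  # 512.0 GB/s: rumored 256-bit GDDR6 @ 16 Gbps
print(peak_bandwidth_gb_s(320, 19.0))  # 760.0 GB/s: RTX 3080, 320-bit GDDR6X @ 19 Gbps
```

So a plain 256-bit GDDR6 bus would leave a sizable gap to the 3080 unless something else makes up for it.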
Hmm, maybe NVIDIA released with 10GB and is not concerned with VRAM usage because of RTX IO for future games?
And yeah, 10GB at the moment is fine imo. The caching debate is irrelevant to me tbh. The only metric that would affect my buying decision is if we see a noticeable drop-off in frame rates.
I can’t even remember the last time that’s happened.
BF5 uses more than 10GB of VRAM @ 4K already
If those specs hold and we really get a 2.2 ghz boost clock then this thing should reach about 5-10% higher performance than 3080 and be within spitting distance of the 3090. I think that’s when NVIDIA will be forced to release 3080 20GB via their AIB partners similar to that 3060 KO which might feature more CUDA cores and higher factory OC.
If the above comes to pass, AMD's 16GB 80 CU monster will cost close to the same as a 3080, maybe even more if it exceeds 3080 performance while consuming less power; it will come down to whether AMD has decent RT performance and a DLSS equivalent. The 3080 20G will probably end up 5-10% slower than the 3090 and cost $900-$1000+, so it will still be a slightly worse value vs AMD, but will command the NVIDIA name and ecosystem as a reason for the premium pricing.
The big losers in all this will be the people who rushed out to buy a 3080 10GB; it's just not a great card, especially relative to what's potentially coming in the next few months. Definitely some crazy releases ahead of us to look forward to.
If 80 CUs, then that's 5120 shaders, and let's say 2200 MHz. That would result in...
2 * 5120 * 2200 * 10^(-3) = 22528.0 GFLOPs (Single Precision).
That's about 10% less than RTX 3080's single precision GFLOPs (25067.5).
That is correct.
However, we don't know what arch changes will be made. Just as with NVIDIA, a Turing "CU" is not the same as an Ampere "CU": they doubled the number of CUDA cores per "CU", among other things.
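The calculation quoted above can be sanity-checked in a couple of lines (the 25067.5 GFLOPs figure for the 3080 is the one from the post):

```python
# Single-precision throughput: 2 ops per clock per shader (one fused multiply-add).
def sp_gflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz / 1000

navi_guess = sp_gflops(5120, 2200)  # rumored Big Navi specs from the post
rtx_3080 = 25067.5                  # 3080 figure quoted in the post
print(navi_guess)                   # 22528.0 GFLOPs
print(1 - navi_guess / rtx_3080)    # ~0.10, i.e. roughly 10% behind the 3080
```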
BF5 uses more than 10GB of VRAM @ 4K already
I was more hinting at AMD changing up what is contained in a CU. They could also double the number of stream processors per "CU" like Ampere did. Or they could go from, say, 64 to 96.

Absolutely. There is no comparing TFs across different architectures. Still, when a 5700 XT matches a 2070 Super, it sure is promising.
I am just curious what voodoo they are using to deal with the bandwidth deficiency.
Is this going to beat the 3080?
If 80 CUs, then that's 5120 shaders, and let's say 2200 MHz. That would result in...
2 * 5120 * 2200 * 10^(-3) = 22528.0 GFLOPs (Single Precision).
That's about 10% less than RTX 3080's single precision GFLOPs (25067.5).
The RTX 3080's peak TFLOPs are unrealizable in a game. Half of them have to compete with integer ops. That's why the 3080 is only about 30% faster on average (best case, at 4K) than the 2080 Ti, which has the same number of shaders but only one FP32 per shader. The 3080 can only do two FP32s, or one FP32 and one INT32. In a homogeneous workload with only FP32, it will do much better. But in games, those extra FP32 opportunities will be mostly missed, due to the need to perform integer calculations as well; probably more so in games that heavily utilize async compute, adding more integer to the mix.

If you make a rough assumption that about a third of the extra FP32 units will actually get used in games, that puts the 3080's TFLOPS figure at more like 21, using the actual typical clock speed in games rather than the understated boost clock that NVIDIA publishes. If there's a Navi 21 card with 5120 shaders (with just one FP32 per shader) at 2.2GHz, it will very likely be faster than the 3080.

If 80 CUs, then that's 5120 shaders, and let's say 2200 MHz. That would result in...
2 * 5120 * 2200 * 10^(-3) = 22528.0 GFLOPs (Single Precision).
That's about 10% less than RTX 3080's single precision GFLOPs (25067.5).
If you make a rough assumption that about a third of the extra FP32 units will actually get used in games, that puts the 3080's TFLOPS figure at more like 21, using the actual typical clock speed in games rather than the understated boost clock that nVidia publishes. If there's a Navi 21 card with 5120 shaders (with just one FP32 per shader) at 2.2GHz, it will very likely be faster than the 3080.
Beyond that, we know that AMD has two FP32 units per shader with CDNA, in the MI100 (42 TFLOPs with 120 CUs at about 1.33GHz).
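The "effective TFLOPs" reasoning above can be sketched as a toy model. To be clear, the one-third utilization of the shared units and the ~1.8 GHz typical game clock are the post's assumptions, not measured values:

```python
# Model of the effective-TFLOPs argument for Ampere: the RTX 3080's 8704 CUDA
# cores split into dedicated FP32 units and FP32/INT32 units that integer work
# competes for. Utilization and clock are assumptions from the post above.

def effective_tflops(dedicated_fp32: int, shared_units: int,
                     shared_fp32_fraction: float, clock_ghz: float) -> float:
    usable = dedicated_fp32 + shared_units * shared_fp32_fraction
    return 2 * usable * clock_ghz / 1000  # 2 ops/clock (FMA), result in TFLOPs

# 4352 dedicated FP32 + 4352 shared units, 1/3 of shared doing FP32, ~1.8 GHz
print(effective_tflops(4352, 4352, 1/3, 1.8))  # ≈ 20.9, close to the post's "more like 21"
```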
Why are you reducing Ampere's flops but not Navi's? Where do you think Navi runs integer ops?
...
Not on mixed FP32/INT32 units.
If 80 CUs, then that's 5120 shaders, and let's say 2200 MHz. That would result in...
2 * 5120 * 2200 * 10^(-3) = 22528.0 GFLOPs (Single Precision).
That's about 10% less than RTX 3080's single precision GFLOPs (25067.5).
I think Big Navi will come in above the 3080 in raster perf. We should know very soon, hopefully.

Based on historical trends, AMD tends to be able to match the 2nd tier from the top Nvidia GPU. I think the new high end AMD will be sitting around $500 with performance slightly higher than 3070 but below 3080.
Based on historical trends, AMD tends to be able to match the 2nd tier from the top Nvidia GPU. I think the new high end AMD will be sitting around $500 with performance slightly higher than 3070 but below 3080.
I saw some news in the last month that Big Navi is targeting 2x 5700 XT performance, which is a huge improvement for AMD, but that would place the chip at just above the 2080 Ti but below the 3080. Not sure about the validity of the report, but it sounds reasonable to achieve 2x the previous high end card. I find it hard to believe that AMD could achieve 2.5-3x the performance of the 5700 XT to match or exceed the 3080.

Well, if AMD makes the big chip, mathematically just matching the 2080 Ti or sitting a little above the 3070 doesn't make sense if you even take the Xbox Series X as evidence. If they only go up to 60 CUs that might hold up, but almost everyone is saying that AMD is keeping Big Navi close to its chest and AIBs so far only know about cut-down versions.
I saw some news in the last month that Big Navi is targeting 2x 5700 XT performance, which is a huge improvement for AMD, but that would place the chip at just above the 2080 Ti but below the 3080. Not sure about the validity of the report, but it sounds reasonable to achieve 2x the previous high end card. I find it hard to believe that AMD could achieve 2.5-3x the performance of the 5700 XT to match or exceed the 3080.
Well, there is your problem. The 3080 is not 2.5x faster than the 5700 XT, and 2x a 5700 XT is way faster than a 2080 Ti. Just look at the average advantage of the 2080 Ti over the 5700 XT; it's not even close to 2x the speed.
BF5 uses more than 10GB of VRAM @ 4K already
Ironically, the 3080 comes in at exactly 2x the 5700 XT in the TPU review @ 4K.
https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/34.html
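As a sanity check of the "2x 5700 XT" rumor against that 2.0x ratio from the TPU 4K summary linked above (relative numbers only, 5700 XT normalized to 1.0):

```python
# Relative 4K performance, normalized to the RX 5700 XT.
rx_5700xt = 1.0
rtx_3080 = 2.0 * rx_5700xt        # per the TPU 4K chart linked above
big_navi_rumor = 2.0 * rx_5700xt  # the rumored "2x 5700 XT" target

print(big_navi_rumor / rtx_3080)  # 1.0: the rumored target lands in 3080 territory
```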
If 256bit is accurate I can't see it beating a 3080 unless AMD has some really cool compression/cache stuff going on. The 5700xt already appears bandwidth starved @ 40CU.
If 256bit is accurate I can't see it beating a 3080 unless AMD has some really cool compression/cache stuff going on. The 5700xt already appears bandwidth starved @ 40CU.
And considering they made an HBM Navi SKU for Apple, maybe they should have just done it again with Navi 2x. GDDR6 at 256-bit seems like it must be a smaller 64 CU card; I can't imagine an 80 CU card being fed so little memory bandwidth.
Read my comment again. I said it's not 2.5x faster. And of course it's going to be 2x at 4K, since the 5700 XT is probably hurting at 4K.
Huh? I wasn’t disagreeing with anything you said.