AMD Plays the VRAM Card Against NVIDIA

Yes, I went out and pulled the current graphs. Like I said, I was previously looking at the 6950 XT charts and the 4070 charts at the same time. Is one not supposed to correct themselves?

I didn't say anything about AMD or NV being perfect or not perfect. The 7900 is meant to compete with the 40-series, where it's over double the usage. Saying that it's meant to compete with the 30-series is ridiculous.

You are cherry-picking here though. Let's be honest... NO one, and I mean NO one, has ever graphed multi-monitor idle power usage until AMD had a launch driver hiccup with the 7900. lol. I have never, ever seen anyone graph power usage with two monitors. If they had, people would have been all over Nvidia for the 3000 series having 2x the power draw of the 2000 series with two monitors.

It's a non-issue... that got blown up by people looking for reasons to shit on AMD when it looked like they had launched a card that might show Nvidia up a bit. Now, AMD being AMD, they shot themselves in the foot out of the gate with some driver bugs and a gen-1 chiplet design that, while good, isn't quite as good as it could or should have been.

And you are correct... I shouldn't have taken a shot at you for correcting yourself. People made it a big story at launch, and it's not like they all jumped up and down when AMD fixed it. Mea culpa, I apologize. :)
 
Holy shit, I thought I was the only one to notice and roll my eyes at AMD's chart. What the fuck were they smoking when they made it?

And those are just the non-RT benches they're lying about. The RT benches they put in their chart as wins are a disgusting, straight-out lie! Even where the 7900 XT keeps up, it's within the margin of error between the two, and yet they have games on the chart claiming to be up to 23% ahead of the 4080 in RT.


How is this not bothering anyone else? Had it been any of their competitors, this forum would be out for blood. :coffee:
Lol. It's a marketing person doing their job.
But yeah, they show RT or non-RT, but no resolution, whether or not DLSS/FSR is on, etc. Chances are they have FSR on for the AMD GPUs and DLSS off on the nVidia ones. That might even be a slide meant for a board presentation... this kind of stuff happens all the time and at pretty much every company. But yeah, at [H] it looks like a double standard: any thread where nVidia marketing leaves information out runs 30 pages long...

Meanwhile, plenty of people are still using a 1660 Ti with 6GB of VRAM; the sky is still falling for them, should land sometime. It's falling really slowly, maybe.

But I kind of like the push against lower VRAM quantities... it will play out in 3 or 4 years, when those 4060s will be right in the 1660 Ti's slot. Likely playing just fine, but not at 4K. Not marketed for 4K as far as I know.
For now, let's ignore that the card is aimed at the lower-end price bracket, make a YouTube video where one poorly ported game plays badly with 8GB of VRAM, and proclaim 8GB isn't enough VRAM (see HU and The Last of Us).
I suppose if the 4060 isn't under $500, that will suck for those shopping with price at the forefront. But those are not 4K cards.

The 4080 16GB is enough for 4K in any decently coded game/engine. The Last of Us != well coded. And even 8GB still works, but I wouldn't call those 4K cards.

And if you are an AMD fan, none of this matters, and marketing = sales/feelings of wellbeing.
 
Holy shit, I thought I was the only one to notice and roll my eyes at AMD's chart. What the fuck were they smoking when they made it?

And those are just the non-RT benches they're lying about. The RT benches they put in their chart as wins are a disgusting, straight-out lie! Even where the 7900 XT keeps up, it's within the margin of error between the two, and yet they have games on the chart claiming to be up to 23% ahead of the 4080 in RT.


How is this not bothering anyone else? Had it been any of their competitors, this forum would be out for blood. :coffee:
I'll have you know that AMD is all about helping us gamers and isn't evil like other companies. /s

Just goes to show that corporations are going to behave in whatever way they want to as long as it makes them more and more money. Vote with your wallet, but don't ascribe some greater morality to any of them.
 
I'll have you know that AMD is all about helping us gamers and isn't evil like other companies. /s

Just goes to show that corporations are going to behave in whatever way they want to as long as it makes them more and more money. Vote with your wallet, but don't ascribe some greater morality to any of them.
Exactly this. AMD is no better than Nvidia. AMD lied about the performance of the 7900 XTX/XT (even reviewers called them out on it).

Neither AMD nor Nvidia is a friend to us customers. They are in the business of making money.
 
Not really. Modern NVMe systems cold boot to the desktop in a few seconds. If you're not actively using the machine, why are you leaving it on? Do you leave all of the lights on in the house too?

My lights don't have context that I lose when rebooting them. I also don't do remote login into my lights.

Sleep would be an alternative on my Windows computer if wake-on-LAN worked more reliably.

Or to look at it from the other end: as long as you don't have hardware with driver bugs that make it idle at increased power, it is pretty cheap to "run" 24/7.
 
My lights don't have context that I lose when rebooting them. I also don't do remote login into my lights.

Sleep would be an alternative on my Windows computer if wake-on-LAN worked more reliably.

Or to look at it from the other end: as long as you don't have hardware with driver bugs that make it idle at increased power, it is pretty cheap to "run" 24/7.
That, and low-power states on modern systems draw close to nothing. I have porch lights that consume more juice. Wake-on-LAN can be your friend.
 
Still 77W on my 7900 XTX (XFX 310 Merc) on a single 4K HDMI 2.1 display, an LG C42. Newest drivers. Maybe a firmware thing here, who knows. Do I care? Not really.

As for AMD benchmarks, maybe a few grains of salt are in order. Anyway, their drivers have been improving performance significantly since launch, so unless TechPowerUp redoes their testing, I don't recommend using the older data at this point.

As for RT performance, look at the Hardware Unboxed 4070 review: most of the newer games with RT have the 7900 XT doing much better than even the 4070 Ti. Older games were designed around Nvidia hardware because AMD's RT hardware arrived late. Newer titles developed with current-generation consoles and RDNA 2 hardware available do much better, and RDNA 3 can take advantage of that as well. Once RDNA 3-aware RT titles hit, I expect a further improvement in RT performance.
 
But yeah, they show RT or non-RT, but no resolution, whether or not DLSS/FSR is on, etc. Chances are they have FSR on for the AMD GPUs and DLSS off on the nVidia ones.
But... the chart literally says "native 4K". It's at the top. 4K is your resolution, and native means no upscaling tech. It's fine to call out BS marketing, but that assumption is just wrong. Am I missing something here, or are people just not reading? Some of the numbers line up with the TPU 4K numbers, some are under, some are above. The actual average is a percent off from TPU.
 
I haven't had the little RX 6600 installed since this video; adding the extra 16GB of system memory helped paging so much. 4K settings @ the 1:37 mark.



Here is my RTX 3070 playing at 1440p

 
Any game that would hit anywhere close to 16GB... the 6800 XT and 6950 XT would shit the bed. Anything close to hitting the 7900 XT's VRAM limit... the 7900 XT and XTX would shit the bed. The only way to get that high is with mods, Ultra settings, RT on, and 4K.
 
Any game that would hit anywhere close to 16GB... the 6800 XT and 6950 XT would shit the bed. Anything close to hitting the 7900 XT's VRAM limit... the 7900 XT and XTX would shit the bed. The only way to get that high is with mods, Ultra settings, RT on, and 4K.
So you're saying that if you like to mod or run higher graphics settings and/or resolutions, it's better to have spare VRAM? I mean, running your RAM at its extreme capacity limit is probably about as good an idea as treating your hard drive the same way, don't cha think?

Heck, before Gamers Nexus and the rest of the YouTube clickbaiters became a thing, I always remember overbuilding gaming PCs being normal. Nowadays they keep pushing the bare minimum to get someone up and running, like 4-6 core processors long after the PS5 and XBX launched. But I'm sure the parts companies love that, especially when they're charging what they are now.
 
Any game that would hit anywhere close to 16GB... the 6800 XT and 6950 XT would shit the bed. Anything close to hitting the 7900 XT's VRAM limit... the 7900 XT and XTX would shit the bed. The only way to get that high is with mods, Ultra settings, RT on, and 4K.
Yeah, I don't think so. I've had 12-14GB used in RE4 on mine and it's fine. Texture decompression mostly takes up CPU cycles. It's okay, you can shit on AMD for their subpar software and Nvidia for being completely driven by greed these days.
 
So you're saying that if you like to mod or run higher graphics settings and/or resolutions, it's better to have spare VRAM? I mean, running your RAM at its extreme capacity limit is probably about as good an idea as treating your hard drive the same way, don't cha think?

Heck, before Gamers Nexus and the rest of the YouTube clickbaiters became a thing, I always remember overbuilding gaming PCs being normal. Nowadays they keep pushing the bare minimum to get someone up and running, like 4-6 core processors long after the PS5 and XBX launched. But I'm sure the parts companies love that, especially when they're charging what they are now.
When games actually use, not just allocate, anywhere close to what the 6800 XT, 6950 XT, 7900 XT and XTX have, yes, they will shit the bed. In fact, no GPU available now would be able to run a game that's actually using 16GB+ of VRAM, considering what it would take to get there.

I think there are only one or two games that naturally use anything north of 12-13GB, and that's at 4K with RT and all that. Most AAA games use between 7-10GB at fully maxed graphics settings.

Hell, CP2077 with PT uses 13-14GB at native 4K, and no GPU can run it at native; with DLSS or FSR the VRAM usage drops anyway, so no amount of VRAM can save you there.

So, what exactly would be considered the minimum? If current high-end GPUs aren't capable of running a game that naturally uses >13GB, then is 12GB really the minimum? If games aren't using more than a handful of cores, are 6-8 core CPUs really the bare minimum? It's all about raw processing power on both the CPU and GPU, not VRAM or core count.
Yeah, I don't think so. I've had 12-14GB used in RE4 on mine and it's fine. Texture decompression mostly takes up CPU cycles. It's okay, you can shit on AMD for their subpar software and Nvidia for being completely driven by greed these days.
RE4 doesn't actually use 12-14GB; it's more along the lines of 9-10GB of actual usage, hence why even a 4070 Ti can get between 70-80 FPS at 4K with RT on, with 1% lows in the high 60s to low 70s.
 
RE4 doesn't actually use 12-14GB; it's more along the lines of 9-10GB of actual usage, hence why even a 4070 Ti can get between 70-80 FPS at 4K with RT on, with 1% lows in the high 60s to low 70s.
I've seen it use more for me; seems odd, maybe I'm thinking of TLOU. Either way, I'd stay far away from anything under 16GB for 1440p+ if I don't plan on upgrading in the next 3-4 years. I think 12GB will be enough at 1440p for the next two or so. Either way, the 6800 XT isn't going to choke on texture data.
 
Always better to have too much VRAM than not enough. Common sense. RT and FG to the rescue, right? Not really.
Well, more VRAM doesn’t hurt considering the more you have the wider the bus.

My argument is that in any game that actually uses anything close to 16GB+ of VRAM, the GPU will choke anyway. Just look at Cyberpunk 2077 with PT on: it uses something like 13-14GB at 4K native, and no GPU can run it at native.

Most AAA games only use between 7-10GB, maybe up to 12GB in niche cases, so imagine how many graphics mods it would take, on top of running the game at Ultra 4K or higher and most likely with some form of RT on, to hit an actual 16GB or more used, not allocated, in any other game. Every GPU would choke.

As I said in my post above, it's about processing power. The GPU needs to have the processing power to handle higher VRAM usage. The only other benefit of having more VRAM is memory bus width; outside of that it's gimmicky 90% of the time, and in the case of the 4090, the other 10% is productivity use, which AMD isn't all that great at.
 
Well, more VRAM doesn’t hurt considering the more you have the wider the bus.

My argument is that in any game that actually uses anything close to 16GB+ of VRAM, the GPU will choke anyway. Just look at Cyberpunk 2077 with PT on: it uses something like 13-14GB at 4K native, and no GPU can run it at native.

Most AAA games only use between 7-10GB, maybe up to 12GB in niche cases, so imagine how many graphics mods it would take, on top of running the game at Ultra 4K or higher and most likely with some form of RT on, to hit an actual 16GB or more used, not allocated, in any other game. Every GPU would choke.

As I said in my post above, it's about processing power. The GPU needs to have the processing power to handle higher VRAM usage. The only other benefit of having more VRAM is memory bus width; outside of that it's gimmicky 90% of the time, and in the case of the 4090, the other 10% is productivity use, which AMD isn't all that great at.
What you're saying doesn't make sense. If a game uses 4K textures and requires 12-13GB of VRAM, it isn't going to take GPU power to deal with that currently, except maybe if we're using DirectStorage. You're talking about RT, but the problem for AMD isn't bus width or VRAM capacity; the 4070 Ti runs RT better on less VRAM and a smaller bus than a 6800 XT.

If we were to load enough texture data onto it using CPU decompression, like we do now, and it exceeded the 12GB limit, it would have bad 1% lows. Rendering textures is never going to make the card choke, and neither would BVH data; it has that capacity in spades, it just doesn't have proper RT cores. You're conflating one problem with another and effectively saying "well, it doesn't need the memory anyway because it's not strong enough in this one use case." Sorry, but that example isn't the only thing VRAM is used for; texture resolution isn't a GPU processing problem.

As long as current-gen consoles don't really support RT, RT is going to have limited use cases. But as we move further away from cross-gen releases, things like the consoles' native hardware texture decompression not being part of the PC platform will be an issue.
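To put rough numbers on the texture side of that argument, here's a quick back-of-the-envelope sketch in Python. The figures are my own assumptions (roughly 1 byte per texel for BC7-style block compression, ~1.33x for a full mip chain), not anything from the posts above:

```python
# Rough texture-memory math. Assumptions: ~1 byte/texel for BC7-style block
# compression, ~1.33x overhead for a full mip chain.
def texture_mb(resolution: int, bytes_per_texel: float = 1.0, mip_factor: float = 1.33) -> float:
    return resolution * resolution * bytes_per_texel * mip_factor / (1024 ** 2)

per_texture = texture_mb(4096)  # one 4096x4096 "4K" texture
print(f"One 4K texture: ~{per_texture:.0f} MB")
print(f"500 resident at once: ~{500 * per_texture / 1024:.1f} GB")
# -> roughly 21 MB each, so ~10 GB for 500 of them, before geometry, render
#    targets or BVH data. Keeping them resident costs memory, not shader
#    throughput, which is the point being argued above.
```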
 
I haven’t wanted more rasterization since the 2080ti.

I’d take a 4070ti for DLSS / RT every day of the week.
 
I've seen it use more for me; seems odd, maybe I'm thinking of TLOU. Either way, I'd stay far away from anything under 16GB for 1440p+ if I don't plan on upgrading in the next 3-4 years. I think 12GB will be enough at 1440p for the next two or so. Either way, the 6800 XT isn't going to choke on texture data.
10GB and up should be good until 2027 for any title that gets a console launch. High-end PC exclusives could manage to demand more, but if they do, they're going in knowing some 60% of gaming PCs won't meet the requirements, so... they would be severely limiting the market they can sell to, which, given the cost, would be unappealing.
 
10GB and up should be good until 2027 for any title that gets a console launch. High-end PC exclusives could manage to demand more, but if they do, they're going in knowing some 60% of gaming PCs won't meet the requirements, so... they would be severely limiting the market they can sell to, which, given the cost, would be unappealing.
I would be surprised by that; I think people are underestimating what these consoles can do in terms of handling large amounts of texture data. Hopefully we get DirectStorage in every AAA title from now on and it won't be an issue.
 
I would be surprised by that; I think people are underestimating what these consoles can do in terms of handling large amounts of texture data. Hopefully we get DirectStorage in every AAA title from now on and it won't be an issue.
We should; it has been built into so many APIs at this stage that not including it is straight-up lazy.
 
I would be surprised by that; I think people are underestimating what these consoles can do in terms of handling large amounts of texture data. Hopefully we get DirectStorage in every AAA title from now on and it won't be an issue.
I run my games off my M.2 drive and it's sweet on the B550 chipset, with Xbox Game Bar for DirectStorage on the games that do support it. Then we also have ReBAR/SAM; the CPU just gets fed so much better these days with PCI Express 4 and 5. 32GB across all DIMMs is a must-have!
 
Not to mention them neutering the memory bus, which gives the 4070 Ti, which was supposed to be the 4080, half the bandwidth of its big brother, and it rolls downhill from there.


Look at that 4060 Ti... OUCH!!! I'm getting 384.6 GB/s on a 980 Ti from 2014! *edit: and they're branding it RTX like you'd really be able to run ray tracing with that... probably got 4K plastered all over the box too.

How much do you want to bet this is going to be one of those series that sticks around for a loooonng time? Because they'll do a "Super" refresh, or whatever they're going to call it this time, with slightly higher clocks and the upgraded memory specs we should have gotten out of the gate.
288 GB/s is the exact same as the 1660 Ti. That card fell behind the 1660 Super in some cases, which had more bandwidth and less GPU power. Now they are trying to use that same bandwidth on a much more powerful GPU. A massive L2 will not save the day in every case.

This card will get spanked by the 12 GB 6700xt.
 
Honestly, 8GB is probably enough for a card with half the GPU power of the 4080. Very minor adjustments can be made for massive VRAM savings. However, the 4060 Ti has closer to 1/3 the bandwidth of the 4080.

There is really NO getting over the bandwidth wall. It's why the 6 TFLOP 6500 XT gets spanked by the 5 TFLOP 5500 XT, which has 50% more bandwidth, while the 9 TFLOP 6600 has no issue beating the 7 TFLOP 5600 XT, which has a 25% bandwidth advantage and probably an excess amount at that.
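For anyone who wants to eyeball those bandwidth gaps, here's a quick sketch in Python. The bandwidth figures are my own approximate spec-sheet numbers, so treat them as ballpark rather than gospel:

```python
# Approximate (assumed) memory bandwidth figures in GB/s.
bandwidth = {
    "RX 5500 XT": 224,
    "RX 6500 XT": 144,
    "RX 5600 XT": 288,
    "RX 6600":    224,
}
gap_vs_6500xt = bandwidth["RX 5500 XT"] / bandwidth["RX 6500 XT"] - 1
gap_vs_6600 = bandwidth["RX 5600 XT"] / bandwidth["RX 6600"] - 1
print(f"5500 XT over 6500 XT: +{gap_vs_6500xt:.0%} bandwidth")  # ~ +56%
print(f"5600 XT over 6600:    +{gap_vs_6600:.0%} bandwidth")    # ~ +29%
# Roughly the 50% and 25% advantages cited above. The 6600 also gets a larger
# Infinity Cache than the 6500 XT, which is part of why it copes while the
# 6500 XT gets spanked.
```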
 
When games actually use, not just allocate, anywhere close to what the 6800 XT, 6950 XT, 7900 XT and XTX have, yes, they will shit the bed.
During the open beta, it looked like Diablo IV was doing that, although because of a bug causing VRAM leaks. I haven't paid attention to the aftermath, so it may have turned out to be something else. But I recall seeing performance fall off a cliff, with VRAM more or less 100% full, plus spilling into system RAM (and CPU/GPU usage not near 100%).

Obviously that's not the same as a game designed to use that much.
 
Honestly, 8GB is probably enough for a card with half the GPU power of the 4080. Very minor adjustments can be made for massive VRAM savings. However, the 4060 Ti has closer to 1/3 the bandwidth of the 4080.

There is really NO getting over the bandwidth wall. It's why the 6 TFLOP 6500 XT gets spanked by the 5 TFLOP 5500 XT, which has 50% more bandwidth, while the 9 TFLOP 6600 has no issue beating the 7 TFLOP 5600 XT, which has a 25% bandwidth advantage and probably an excess amount at that.

Guess that massive cache wasn't as effective after all...
 
Guess that massive cache wasn't as effective after all...

Nvidia: the majority of gamers have 8GB graphics cards, so it is the responsibility of game developers to optimize for that.

Hardware Unboxed disagrees, showing many games struggling at 1080p Ultra (timestamped below), and in one case even 1080p High is a struggle.


 
Newer games bottlenecked by the 8GB of VRAM on the 7600 (in comparison to the 12GB 6700 XT)

 
You are cherry-picking here though. Let's be honest... NO one, and I mean NO one, has ever graphed multi-monitor idle power usage until AMD had a launch driver hiccup with the 7900.
Was there ever a card before that launched with ~100W idle consumption on multi-monitor?

Yes, the post-fix level is not an issue; 40W or 20W is not a big deal, even in Germany.

If I had a company with 40 desktops running multi-monitor 50 hours a week, a 75-watt difference in a place with high energy costs could show up if it were never fixed. Even for a single user at 30 cents per kWh working from home, over 3 years that's a $150-$200 difference in some places. Not that big a deal, but sometimes that's the price difference between two options.
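For what it's worth, that 3-year figure roughly checks out. A quick sketch in Python, using the numbers from the post above (75W extra, 50 hours a week, 30 cents per kWh) as the assumptions:

```python
# Back-of-the-envelope cost of an extra 75 W of idle draw, using the post's numbers.
extra_watts = 75
hours_per_week = 50
weeks = 52 * 3          # three years
price_per_kwh = 0.30    # "30 cent" electricity

extra_kwh = extra_watts * hours_per_week * weeks / 1000
print(f"{extra_kwh:.0f} kWh extra -> ${extra_kwh * price_per_kwh:.0f} over 3 years")
# -> 585 kWh extra -> $176 over 3 years, i.e. right in the $150-$200 range.
```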
 
if you adjust your settings to an appropriate level it's not an issue.
True. The point is that the performance of the 7600 improves (at high settings) if it has 4GB more VRAM.

So AMD could release an overclocked (double TDP?) 7600 XT with 16GB of VRAM, and that would make more sense for 1080p Ultra gamers / RT gamers / 4K low-medium gamers, etc.
 
Newer games bottlenecked by the 8GB of VRAM on the 7600 (in comparison to the 12GB 6700 XT)
Interesting as theoretical work (especially for people who aim for 60 and not 70-90 FPS when they game) and worth taking into consideration, but in the first example even a 6700 XT would barely be over 60 FPS.

A 7600 with 24GB of VRAM would not stay above 60; it is not really an Ultra-settings-with-RT card for already-released games, regardless of the VRAM amount.

One way to think about it: if your Ultra settings run well on an xx60-class card, you should probably let the player have better than that, or it wastes all the higher SKUs and the future unreleased ones. If I were making a game, why would I not make Ultra settings that at least push a 3080?

A 3060 was not doing even 30 FPS well playing Cyberpunk 2077 with RT on (do not try it with a 6600); it is really not new for that class of card to not play a recent game at Ultra. A 3080 Ti was barely playing Cyberpunk at 1080p Ultra with RT on.

A 1060 6GB at launch was not playing everything at 1080p either (even with the low bar of a 60 FPS average rather than 1% minimums), and Pascal was seen as a golden generation; a 1070 or higher was needed for Assassin's Creed, Crysis, Far Cry and whatnot to be comfortable at max settings.
 
Was there ever a card before that launched with ~100W idle consumption on multi-monitor?

Yes, the post-fix level is not an issue; 40W or 20W is not a big deal, even in Germany.

If I had a company with 40 desktops running multi-monitor 50 hours a week, a 75-watt difference in a place with high energy costs could show up if it were never fixed. Even for a single user at 30 cents per kWh working from home, over 3 years that's a $150-$200 difference in some places. Not that big a deal, but sometimes that's the price difference between two options.
Short answer. NO
Long answer... YES

Just do a quick Google and you'll find all sorts of Nvidia users asking... is it normal for my 3090 to idle at 80 watts? When they describe their multi-monitor setup, the response is: that is normal.

Every card draws more power when it has more pixels to idle. The higher the resolution, the higher the power draw. Ultrawide and dual-monitor users have always known that. Granted, with most GPUs, if they idle at 15 watts they jump into the 25-watt range. But when we are talking about cards like the 3090 that idled at 40 watts... they jumped up to the 60-80 watt range.

So yes, the 7900 XTX was the first card anyone noticed breaking 100 watts idling with two 4K monitors. BUT it was in no way the first flagship to draw twice as much power when doing so. After AMD patched up their driver on that score, the power usage is right around the same as a 3090 with two screens, maybe a bit better actually. So yes, it was a driver issue, but people freaking out about it was pretty laughable. I would suggest anyone that has used more than one monitor for a long time understood idle power draw is higher; how could it not be?
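If anyone wants to sanity-check their own card instead of relying on forum anecdotes, something like this works on NVIDIA cards (a minimal sketch; it assumes nvidia-smi is installed and on the PATH, and the sample output is made up):

```python
import subprocess

# Query the current board power draw via nvidia-smi (NVIDIA cards only).
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,power.draw", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
# Hypothetical output at multi-monitor idle:
#   NVIDIA GeForce RTX 3090, 78.12 W
```

Radeon users can get a similar reading from the performance metrics overlay in Adrenalin.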
 
Just do a quick Google and you'll find all sorts of Nvidia users asking... is it normal for my 3090 to idle at 80 watts? When they describe their multi-monitor setup, the response is: that is normal.
Well, if some model/driver/OS/monitor combination did it, that's different. In the reviews it was sub-30W multi-monitor, and when they asked Nvidia, not random people, they did not get an "it is normal" answer:
https://forums.developer.nvidia.com...r-draw-is-astronomical-with-rtx-3090/155632/3
but a:
We have root caused the issue, fix will be incorporated in future driver releases.

And AMD also seems to have addressed it; in the 7600 review the 7900 XTX is down to 48W and the XT to 40W (i.e., it was a bug, it seems).

I fully take the point that it happens more than I knew.
 
Well, if some model/driver/OS/monitor combination did it, that's different. In the reviews it was sub-30W multi-monitor, and when they asked Nvidia, not random people, they did not get an "it is normal" answer:
https://forums.developer.nvidia.com...r-draw-is-astronomical-with-rtx-3090/155632/3
but a:
We have root caused the issue, fix will be incorporated in future driver releases.

And AMD also seems to have addressed it; in the 7600 review the 7900 XTX is down to 48W and the XT to 40W (i.e., it was a bug, it seems).

I fully take the point that it happens more than I knew.
I can't think of a specific example, but I know it was something people noticed going way, way back, to like the Matrox days even. No doubt you're correct that the crazy doublings can be avoided with proper driver code. I'll stipulate, though, that when it breaks 100 it becomes very noticeable. lol
 
This is still happening:

When VRAM-limited, the perf of the RTX 4060 & 4060 Ti is effectively identical (the Ti is usually +17% at 4K).
At the same time, the perf gap to SKUs with sufficient VRAM explodes.

At 4K, the 4060 Ti should normally reach ~68% of the 4070's perf level.
In this example, it's only ~35%.



https://twitter.com/3DCenter_org/status/1674266691461865472?s=20

https://www-forum--3dcenter-org.tra...=auto&_x_tr_tl=en&_x_tr_hl=en-GB#post13338406
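Just to spell out what those two percentages imply (same numbers as the tweet above, nothing new):

```python
# How much of its expected 4K performance the 4060 Ti delivers in this VRAM-limited case.
expected_vs_4070 = 0.68   # ~68% of a 4070 when not VRAM-limited
observed_vs_4070 = 0.35   # ~35% in the worst-case example above
print(f"Delivers only {observed_vs_4070 / expected_vs_4070:.0%} of its usual relative performance")
# -> about 51%, i.e. the card is roughly halved by running out of VRAM here.
```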
 
What FPS would an 80GB-VRAM 4060 do in Cyberpunk with RT Overdrive on, in a test called "GPU worst-case reflection"?

It was never in question that in many scenarios a VRAM limit will hurt you; the questions are whether those scenarios are relevant (would it have been 70 FPS, or below 45 anyway) and how costly the compromises are to make the game work.
 