Nvidia would like to correct the record on VRAM it seems lol

staknhalo

This article is so tone-deaf and so full of techno-babble that the memes from AMD just write themselves. NVIDIA needs to get these game companies back in line with proper ports, or buck up and spec their cards to match the competition, who are brute-forcing it.
 
Both RDNA 2 and Lovelace do show how a good amount of cache can diminish the performance hit of lower bandwidth, even if it seems to catch up with you at some point (say 4K), but I am not sure that's where most of the wonder/issue was.
 
This article is so tone-deaf and so full of techno-babble that the memes from AMD just write themselves. NVIDIA needs to get these game companies back in line with proper ports, or buck up and spec their cards to match the competition, who are brute-forcing it.
Yup. Found myself getting bored with it and I am more techy than the lot.

MOAR VRAM GOOD. LESS BAD. READ HARD.
 
Just like the 4090 connector wasn't affecting them much, but it is still better to be ahead of the issue.
I wouldn't call this "getting ahead of the issue". The issue is already here. This article is not the correct way to go about rectifying the problem. It comes off as cope.
 
It didn't make my eyes gloss over. TL;DR is 'faster and more efficient with more L2 means data moves through fast enough that we don't need as much, effective bandwidth 2x as fast (or something)'.

But like I said above, I just found it funny they even felt the need to address it; didn't expect them to ¯\_(ツ)_/¯
 
Here is a summary of what I just read:
Nvidia will offer bigger-VRAM cards in July, and they will cost more.
Nvidia blames developers for not making their games efficient enough to use less VRAM.
Developers are cutting costs by using shitter compression and hiring junior devs for engines they have little experience with.
NVidia isn’t wrong, but it doesn’t do anything to fix the issue either.

When various laws were passed limiting what they could do with lootboxes to make up their costs, they were pretty clear they would have to cut costs somehow, so they are just passing it on to us. Why spend a million on doing a better job optimizing textures when they can tell us to buy more VRAM?
 
Developers are cutting costs by using shitter compression and hiring junior devs for engines they have little experience with.
NVidia isn’t wrong, but it doesn’t do anything to fix the issue either.

When various laws were passed limiting what they could do with lootboxes to make up their costs, they were pretty clear they would have to cut costs somehow, so they are just passing it on to us. Why spend a million on doing a better job optimizing textures when they can tell us to buy more VRAM?

Even in the couple of Moore's Law is Dead podcasts I've seen where he has random industry game devs on so they can 'tell the audience they just need more VRAM', the 'but why' justification always just comes down to 'yeah, we're just not gonna take the time to optimize it', and for some reason nobody ever mentions that, from what I've noticed lol
 
Well, at least Nvidia clarified that the "4070/4070 Ti are 1440p HRR GPUs," so maybe these tech tubers can stop complaining about them not being good 4K cards... but I won't hold my breath on that.

They're not wrong either. I'd rather trust a company that's been making GPUs, with a lot of success, for over 25 years than people on the internet with little to no experience in GPU design and maybe some experience in game development, especially when said company also provides free tools and features to help developers, which shows they're not tone-deaf when it comes to PC game development. I don't say this as a shill; I say this as someone who understands that Nvidia has access to all the same info we have, plus all the data they have, which is probably a whole hell of a lot more than anyone who claims to be an "expert" has.

Then again, it's the internet... it's fed by a combination of fear porn, speculation, buzzwords, and echo chambers, so people will probably write off this article as Nvidia propaganda and continue to blast them; nature of the beast I 'spose.
 
Well, at least Nvidia clarified that the "4070/4070 Ti are 1440p HRR GPUs," so maybe these tech tubers can stop complaining about them not being good 4K cards... but I won't hold my breath on that.

They're not wrong either. I'd rather trust a company that's been making GPUs, with a lot of success, for over 25 years than people on the internet with little to no experience in GPU design and maybe some experience in game development, especially when said company also provides free tools and features to help developers, which shows they're not tone-deaf when it comes to PC game development. I don't say this as a shill; I say this as someone who understands that Nvidia has access to all the same info we have, plus all the data they have, which is probably a whole hell of a lot more than anyone who claims to be an "expert" has.

Then again, it's the internet... it's fed by a combination of fear porn, speculation, buzzwords, and echo chambers, so people will probably write off this article as Nvidia propaganda and continue to blast them; nature of the beast I 'spose.

I play at 1440 and have seen VRAM usage as high as 14GB in Hogwarts Legacy. Without RT.
 
seems about right from what i know of how vram and games work...
also re the vram, lots of games seem to allocate memory but not use it all


so maybe these tech tubers can stop complaining about them not being good 4K cards... but I won't hold my breath on that.
right!? ive said that over and over, not all cards are good for 4K, at least not without some trickery, yet everyone seems to expect it now....
 
right!? ive said that over and over, not all cards are good for 4K, at least not without some trickery, yet everyone seems to expect it now....
Who's saying anything about expecting a 4060ti to do 4k here? Let alone a 4070ti? Except that yes, it's disgusting seeing a 1440p card cost so much.
 
The trolls in some other threads keep going on about how a 3070 is a "high end" card, but it only has 8GB, so Nvidia did it on purpose to get you to buy a newer card. It can't run games at 4K Ultra settings (or maybe even certain games at 1440p Ultra settings), so the sky is falling (again) and Nvidia made it fall.
Also, The Last of Us runs like shit on it, says HU, and they say it's because the VRAM is only 8GB, so more bitching about Nvidia making people buy new cards already, or how you should have bought AMD's card, or something.

Meanwhile tons of other games work perfectly on 8GB of VRAM, and The Last of Us (yet another shitty console port) has been patched and the issue is much improved with recent game patches.
It would be nice for HU to redo their testing with the latest game patch for The Last of Us and see if they can still back up their claims.

Oh and a 3070 is a high end card, Ultra settings 4Life!

https://feedback.naughtydog.com/hc/...Last-of-Us-Part-I-v1-0-5-0-Patch-Notes-for-PC
Patch Notes said:
Reduced the VRAM impact of texture quality settings, allowing most players to increase their texture quality settings or experience improved performance with their current settings
 
Who's saying anything about expecting a 4060ti to do 4k here? Let alone a 4070ti? Except that yes, it's disgusting seeing a 1440p card cost so much.
i was replying to a specific line, from another poster, in regards to all current gen cards and the expectation of 4k/ultra by every fucking tuber out there....
 
Both RDNA 2 and Lovelace do show how a good amount of cache can diminish the performance hit of lower bandwidth, even if it seems to catch up with you at some point (say 4K), but I am not sure that's where most of the wonder/issue was.
100%, and since it seems that cache scales better in power and density than IO does, they are trading a 32 MB cache and a 128-bit bus against a 4 MB cache and a 256-bit bus.
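Rough back-of-the-envelope sketch of that trade-off; the raw bandwidth figures are the real 256-bit/14 Gbps and 128-bit/18 Gbps numbers, but the hit rates are made-up guesses for illustration, not anything Nvidia has published:

```python
# Sketch: how a bigger L2 can make a narrower bus look wider.
# Hit rates below are illustrative guesses, NOT measured numbers.

def effective_bandwidth(dram_gbs: float, l2_hit_rate: float) -> float:
    """If L2 hits never touch DRAM, only the miss traffic crosses the
    external bus, so the bus looks 1/(miss rate) times wider."""
    return dram_gbs / (1.0 - l2_hit_rate)

# 256-bit GDDR6 @ 14 Gbps with a small 4 MB L2 (low hit rate)
old = effective_bandwidth(dram_gbs=448, l2_hit_rate=0.30)
# 128-bit GDDR6 @ 18 Gbps with a 32 MB L2 (higher hit rate at 1080p/1440p)
new = effective_bandwidth(dram_gbs=288, l2_hit_rate=0.60)

print(f"256-bit + 4 MB L2  -> ~{old:.0f} GB/s effective")   # ~640 GB/s
print(f"128-bit + 32 MB L2 -> ~{new:.0f} GB/s effective")   # ~720 GB/s
# At 4K the working set outgrows the 32 MB cache, the hit rate drops,
# and the narrow bus catches up with you, as noted above.
```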

The 4060 Ti should still have been either 16 GB only at $400, or a 192-bit bus with 12 GB at $400.

An 8 GB card that struggles with current games at 1080p for $400 is crazy.
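For context on why the capacity options land on those particular numbers: each GDDR6 chip sits on a 32-bit channel and comes in 1 GB or 2 GB densities, and clamshell mode doubles that. Quick sketch:

```python
# Why VRAM capacity is tied to bus width: each GDDR6 chip sits on a
# 32-bit channel and ships in 1 GB or 2 GB densities; "clamshell"
# mode puts two chips per channel and doubles the total.

CHIP_DENSITIES_GB = (1, 2)

def capacity_options(bus_width_bits: int) -> list[int]:
    channels = bus_width_bits // 32
    normal = [channels * d for d in CHIP_DENSITIES_GB]
    clamshell = [2 * c for c in normal]
    return sorted(set(normal + clamshell))

for bus in (128, 192, 256):
    print(f"{bus}-bit bus -> {capacity_options(bus)} GB")
# 128-bit -> [4, 8, 16] GB   (8 GB now, 16 GB only via clamshell)
# 192-bit -> [6, 12, 24] GB  (the 12 GB option mentioned above)
# 256-bit -> [8, 16, 32] GB
```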
 
Even in the couple of Moore's Law is Dead podcasts I've seen where he has random industry game devs on so they can 'tell the audience they just need more VRAM', the 'but why' justification always just comes down to 'yeah, we're just not gonna take the time to optimize it', and for some reason nobody ever mentions that, from what I've noticed lol
The PS5 has 16 GB, so they reserve 12 GB for the graphics RAM buffer. They will optimize to keep it under 12 GB, but they will spend their finite time making the game better rather than optimizing to keep it under 8 GB while still looking its best. Instead they will just turn down the textures to potato quality to keep it under 8 GB.
 
seems about right from what i know of how vram and games work...

You say a lot of things and have repeated this ad nauseam for I don't know how long.

You never explain why though. Could it be that there's something a little more complex going on and that it's actually useful to be able to fill VRAM with stuff you might need?

If a game has a feature where it can proactively load stuff into memory in anticipation of it being used, leading to better performance, why would I not want to benefit from that? 20 years ago games weren't sophisticated enough to do this kind of stuff.

In games made today, they can. Allocation and in-use are metrics relevant only inside the game; the distinction between the two is pretty irrelevant for an end user when you're talking about performance.

If it's allocated, it means that performance might benefit and I'm going to take that over not having the opportunity at all
 
You say a lot of things and have repeated this ad nauseam for I don't know how long.

You never explain why though. Could it be that there's something a little more complex going on and that it's actually useful to be able to fill VRAM with stuff you might need?

If a game has a feature where it can proactively load stuff into memory in anticipation of it being used, leading to better performance, why would I not want to benefit from that? 20 years ago games weren't sophisticated enough to do this kind of stuff.

In games made today, they can. Allocation and in-use are metrics relevant only inside the game; the distinction between the two is pretty irrelevant for an end user when you're talking about performance.

If it's allocated, it means that performance might benefit and I'm going to take that over not having the opportunity at all
riiight...
It's in the article. Based on settings, the game says (I think) "I might need up to this much VRAM, so I'll allocate it." But that doesn't mean it will actually use all of that allocation. People see a game that allocates all of the 8GB available and assume that it's run out, but it might only be using 6GB. The same game may allocate 12GB on a larger card and still only use 6GB.
sure.



allocate
verb (used with object), al·lo·cat·ed, al·lo·cat·ing.
to set apart for a particular purpose; assign or allot:

"You say a lot of things and have repeated this ad nauseum for I don't know how long"
4 posts including this one...
 
riiight...
It's in the article. Based on settings, the game says (I think) "I might need up to this much VRAM, so I'll allocate it." But that doesn't mean it will actually use all of that allocation. People see a game that allocates all of the 8GB available and assume that it's run out, but it might only be using 6GB. The same game may allocate 12GB on a larger card and still only use 6GB.
sure.



allocate
verb (used with object), al·lo·cat·ed, al·lo·cat·ing.
to set apart for a particular purpose; assign or allot:

"You say a lot of things and have repeated this ad nauseum for I don't know how long"
4 posts including this one...
Every game engine on planet Earth allocates more than what is in use. No one gives a crap about VRAM unless performance tanks. Why would anyone care otherwise? The reason some might mention it is usually because a game has been tested before and VRAM limitations become obvious due to bad performance.
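Toy illustration of the allocated-vs-in-use distinction, with made-up numbers and class names (not any real engine's API):

```python
# Toy model of "allocated" vs "actually in use" VRAM.
# Numbers and names are hypothetical -- not any real engine's API.

class TexturePool:
    """Streaming pool that grabs a big budget up front but only keeps
    the textures it currently needs resident."""

    def __init__(self, vram_gb: float, headroom_gb: float = 2.0):
        # Budget against whatever the card reports, minus some headroom
        # for the OS/compositor and other apps.
        self.allocated_gb = vram_gb - headroom_gb  # what monitoring tools report
        self.resident_gb = 0.0                     # what is actually used

    def stream_in(self, asset_gb: float) -> bool:
        if self.resident_gb + asset_gb > self.allocated_gb:
            return False                           # would have to evict something first
        self.resident_gb += asset_gb
        return True

for card_gb in (8, 16):
    pool = TexturePool(vram_gb=card_gb)
    pool.stream_in(5.5)                            # same scene on both cards
    print(f"{card_gb} GB card: allocated ~{pool.allocated_gb:.1f} GB, in use ~{pool.resident_gb:.1f} GB")
# 8 GB card: allocated ~6.0 GB, in use ~5.5 GB
# 16 GB card: allocated ~14.0 GB, in use ~5.5 GB
# Same scene, same footprint in use; the bigger card just reports a bigger
# allocation. Trouble only starts when the working set itself won't fit.
```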
 
Every game engine on planet Earth allocates more than what is in use. No one gives a crap about VRAM unless performance tanks. Why would anyone care otherwise? The reason some might mention it is usually because a game has been tested before and VRAM limitations become obvious due to bad performance.
sure
 
Developers are cutting costs by using shitter compression and hiring junior devs for engines they have little experience with.
NVidia isn’t wrong, but it doesn’t do anything to fix the issue either.

Agreed, but it doesn't matter. Devs don't care that Nvidia wants to save a few bucks per GPU by undersizing VRAM capacity, nor do they care about their planned obsolescence strategies. They are designing for the market they perceive, rightly or wrongly, and they've demonstrated they don't really care about PC gamers. You have to react to the situation as it is, not as you wish it were.
 
"Don't believe your lying eyes." - Nvidia

Basically if you run out of VRAM, it's your own fault for trying to use a 1080p card at higher resolutions. Or it's the developer's fault for not optimizing their game.
But under no circumstances is it Nvidia's fault.
 
The trolls in some other threads keep going on about how a 3070 is a "high end" card, but it only has 8GB, so Nvidia did it on purpose to get you to buy a newer card. It can't run games at 4K Ultra settings (or maybe even certain games at 1440p Ultra settings), so the sky is falling (again) and Nvidia made it fall.
Also, The Last of Us runs like shit on it, says HU, and they say it's because the VRAM is only 8GB, so more bitching about Nvidia making people buy new cards already, or how you should have bought AMD's card, or something.

Meanwhile tons of other games work perfectly on 8GB of VRAM, and The Last of Us (yet another shitty console port) has been patched and the issue is much improved with recent game patches.
It would be nice for HU to redo their testing with the latest game patch for The Last of Us and see if they can still back up their claims.

Oh and a 3070 is a high end card, Ultra settings 4Life!

https://feedback.naughtydog.com/hc/...Last-of-Us-Part-I-v1-0-5-0-Patch-Notes-for-PC

Help me understand why gamers are so determined to defend Nvidia on their objectively bad decision when it comes to VRAM capacity. I genuinely want to know.
 
riiight...
It's in the article. Based on settings, the game says (I think) "I might need up to this much VRAM, so I'll allocate it." But that doesn't mean it will actually use all of that allocation. People see a game that allocates all of the 8GB available and assume that it's run out, but it might only be using 6GB. The same game may allocate 12GB on a larger card and still only use 6GB.
sure.



allocate
verb (used with object), al·lo·cat·ed, al·lo·cat·ing.
to set apart for a particular purpose; assign or allot:

"You say a lot of things and have repeated this ad nauseum for I don't know how long"
4 posts including this one...

So all the developers in all of game development, getting the biggest bucks in the industry, are also the ones too stupid to intelligently allocate memory?

riiight...

'I think I might need' is actually a benefit, because it's memory that would otherwise be unused. Games are doing this intelligently; it's not like they're filling it with garbage.

How do you tell a card to use only the memory it needs? Do you whisper to it?

If you did, and it could answer, it'd say 'I give the game whatever it wants.'
 
So all the developers in all of game development, getting the biggest bucks in the industry, are also the ones too stupid to intelligently allocate memory?

riiight...

'I think I might need' is actually a benefit, because it's memory that would otherwise be unused. Games are doing this intelligently; it's not like they're filling it with garbage.

How do you tell a card to use only the memory it needs? Do you whisper to it?

If you did, and it could answer, it'd say 'I give the game whatever it wants.'

I'm not sure what offends me more as a gamer: poor optimization from developers, or Nvidia selling me a card with 12GB of VRAM on a 192-bit bus for $800 and telling me I should like it that way because it's "optimized" for 1440p, while also saying RTX is a critical feature I should pay for despite it benefiting from more VRAM.
 
Agreed, but it doesn't matter. Devs don't care that Nvidia wants to save a few bucks per GPU by undersizing VRAM capacity, nor do they care about their planned obsolescence strategies. They are designing for the market they perceive, rightly or wrongly, and they've demonstrated they don't really care about PC gamers. You have to react to the situation as it is, not as you wish it were.
Just as Nvidia doesn't care that developers want to shave hundreds of man-hours per week by choosing inferior, cheaper toolsets or outsourced labor. It is a systemic problem.
If we had so much as a second player in the desktop GPU market, this could all have been avoided.
 
So who actually read the whole thing? I got to the second bullet point, and when I realized this reads like a cross between an ad and a Wikipedia article I just said fuck it and scrolled to the bottom, happy I didn't read it all.
 
Just as Nvidia doesn't care that developers want to shave hundreds of man-hours per week by choosing inferior, cheaper toolsets or outsourced labor. It is a systemic problem.
If we had so much as a second player in the desktop GPU market, this could all have been avoided.
Even Intel has 16GB on the A770, and it's basically a mid-range card. How can two of the three GPU makers provide more RAM than the third, and yet it falls on "lazy developers" because they decided to move beyond 8GB after 8 years? John Carmack is the exception, not the rule.
 
Would not surprise me one bit TBH
Yeah, give an inch and they take a mile. When games were on floppy disks there was a give-and-take between coding efficiently and spending more money on floppy disks, and the same was true when things moved to CD-ROM, but when digital distribution came about, all bets were off: 150 gigabytes, no problem! We don't need to code efficiently.
 
Even Intel has 16GB on the A770, and it's basically a mid-range card. How can two of the three GPU makers provide more RAM than the third, and yet it falls on "lazy developers" because they decided to move beyond 8GB after 8 years? John Carmack is the exception, not the rule.
Intel can put 16GB there because they want people to buy their card as a prosumer workstation card. Intel doesn't need to deal with product segmentation the way AMD and Nvidia do, so they can throw everything at the wall and see who buys it. They want adoption from users, consumers, and enterprises, so they are trying to offer a low-to-mid-range option for each, something both AMD and Nvidia have failed to do; that is their foot in the door.

The developers are busting their asses; laziness is not the right word. Upper management is pinching them, so they can either pay an artist to create the asset, or pay an artist to compress and tweak an asset so it works properly at the desired resolutions while looking good, but they can't do both, because accounting already only approved $1200 for what should have been $3000.

If games were looking better at 1080p or 1440p, with more assets and better-quality textures, that would be one thing, but if you look at the textures and the assets you will see the opposite: they have gotten worse, and developers are leveraging FSR and DLSS to cover the gap. Good art departments are expensive, and making a texture work under heavy compression from 720p to 4K while still looking good takes talent, and talent you have to pay for. So a common trend right now is to use a 4K texture, apply a canned compression, and let TAA/FSR/DLSS handle the rescaling, where before there would have been a human working those files for optimal results. The canned approach does not perform nearly as well, but it cuts millions out of the art budget for a AAA title, and between memory efficiency and FPS or a few million a year off the budget, I'll let you guess which way upper management is going to swing.
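Quick arithmetic on why those texture format choices dominate VRAM; the per-texel sizes are the standard block-compression figures, but the 300-texture count is just a made-up number for illustration:

```python
# Rough math on texture VRAM footprint at different compression levels.
# Per-texel sizes are standard for these formats; the 300-texture count
# is arbitrary, not taken from any real game.

BYTES_PER_TEXEL = {
    "RGBA8 (uncompressed)": 4.0,
    "BC7 (high-quality block compression)": 1.0,
    "BC1 (aggressive block compression)": 0.5,
}

def texture_mb(size_px: int, bytes_per_texel: float) -> float:
    base = size_px * size_px * bytes_per_texel
    return base * 4 / 3 / (1024 ** 2)        # full mip chain adds ~1/3

for fmt, bpt in BYTES_PER_TEXEL.items():
    one = texture_mb(4096, bpt)
    total_gb = one * 300 / 1024
    print(f"{fmt}: {one:.1f} MB per 4K texture, ~{total_gb:.1f} GB for 300 of them")
# RGBA8 (uncompressed): 85.3 MB per 4K texture, ~25.0 GB for 300 of them
# BC7 (high-quality block compression): 21.3 MB per 4K texture, ~6.2 GB for 300 of them
# BC1 (aggressive block compression): 10.7 MB per 4K texture, ~3.1 GB for 300 of them
```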
 
Is the GDDR6X vs regular GDDR6 choice also partly responsible? It could be costlier and more power-hungry as well; at least IIRC, when Ampere launched, that was partly the case.
 