Would you choose a 12GB or a 24GB RTX 3090?

12GB 3090 or 24 GB 3090?

  • 12 GB for $1200

    Votes: 70 52.6%
  • 24 GB for $1600

    Votes: 36 27.1%
  • How dare you suggest 12 more GB of GDDR6X isn't FREE

    Votes: 27 20.3%

  • Total voters
    133
  • Poll closed.
I think 16GB is more than enough for gaming. Mid-range, 8GB I guess. But beyond 16GB I really don't think it is needed for gaming.
 
I've yet to run any game with my 2080ti @ 4k native res that even comes near using the 12GB. Some games like RE2 claim to use a lot, but I've never seen it. In reality @ 4k most games are using around 6-8GB tops.

If I were buying another card now and I had to choose between 24GB or 12GB, I would save the money. I just don't see most games using more than 12GB for at least another 3-4 years, unless you planned on running an 8K display.
 
I still have yet to max out my launch 1080 at 8GB so I'm sure 12GB will be plenty for the next 4 or so years.

I routinely max out 8GB @ 3440x1440. 4K obviously has no trouble either. What resolution are you playing at?
 
Where are you getting this price from? That the 2080Ti replacement will be more than $1200?

Most people expect the RTX 3090 will cost over $1200, and given history, that seems VERY likely.

At minimum we are looking at a new monster top-end card with at least 12GB of GDDR6X, a brand new (read: expensive) super fast memory, potentially delivering about a 60% improvement in memory bandwidth.

That potential 60% increase in bandwidth might be some clue as to how big a performance jump this card will be over the 2080ti.

This card will be a Monster, and NVidia will most likely look to cash in on that.
 
This poll is useless. It uses made up dollar amounts and puts them to technically unconfirmed products. Might as well ask how much someone would be willing to pay for a meet and greet with Bigfoot.

I like the other "guess the price" poll better.
 
A better poll would be $1200 for a 12 GB 3090 (384 bit) vs the same price for a 16 GB 3080 (256 bit, fewer cores).
 
Generally it gets ugly if you want uneven amounts of memory on your memory bus. So typically you want 1 (or maybe 2) chips for each 32 bit memory channel. The 3090 has a 384 bit bus, which leads to either 12GB or 24GB.

Cards based on the same GPU chip with some disabled units will have either a 320 bit (10GB or 20GB) or a 352 bit bus (11GB or 22GB).

You would need to step down to a 256 bit bus to have a 16GB card, or step up to a 512 bit bus, but that isn't happening with Ampere.
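The chip-per-channel constraint above is easy to sketch. This is a toy illustration, assuming one 1GB or 2GB GDDR6 chip per 32-bit channel and no clamshell or mixed-density tricks:

```python
# Toy sketch: VRAM capacities implied by bus width, assuming one 1GB or
# 2GB chip per 32-bit memory channel (no clamshell, no mixed densities).
def capacity_options(bus_bits, chip_sizes_gb=(1, 2)):
    channels = bus_bits // 32      # one chip per 32-bit channel
    return [channels * size for size in chip_sizes_gb]

for bus in (256, 320, 352, 384, 512):
    print(f"{bus}-bit -> {capacity_options(bus)} GB")
# 384-bit gives [12, 24] GB, 352-bit gives [11, 22] GB,
# 256-bit gives [8, 16] GB, 512-bit gives [16, 32] GB
```

This is why 16GB only falls out naturally on a 256 or 512 bit bus.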

Memory bandwidth is becoming a bigger problem every generation. The 980ti was 6GB 384 bit at 336 GB/s. The 1080ti was 11GB 352 bit at 484 GB/s. The 2080ti was 11GB 352 bit at 616 GB/s.

The 3090 will again have to be 352 or 384 bit, as GDDR6X is just not fast enough to compensate for a 256 bit bus.
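Those generational figures all come from the same formula: peak bandwidth in GB/s = (bus width in bits / 8) × per-pin data rate in Gbps. A quick check against the cards quoted above:

```python
# Peak memory bandwidth in GB/s: bus width in bits / 8 * data rate in Gbps.
def bandwidth(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth(384, 7))    # 980 Ti,  7 Gbps GDDR5:   336.0 GB/s
print(bandwidth(352, 11))   # 1080 Ti, 11 Gbps G5X:    484.0 GB/s
print(bandwidth(352, 14))   # 2080 Ti, 14 Gbps GDDR6:  616.0 GB/s
print(bandwidth(256, 14))   # a 256-bit bus at 14 Gbps: 448.0 GB/s
```

Even at 14 Gbps, a 256 bit bus lands well below the 2080 Ti's 616 GB/s, which is the point being made: you need faster memory, a wide bus, or both.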

The only realistic solution, other than a monstrous 24 GB pool, is to use a hybrid solution akin to the Series X. Not all textures would need the insane speeds so you could have something like 11GB running at 352 bit gddr6x (850GB/s?) and 4 of the chips using 2 GB Modules giving you an extra 4GB at 128 bit or 300 GB/s for a total of 15 GB.
 
The only realistic solution, other than a monstrous 24 GB pool, is to use a hybrid solution akin to the Series X. Not all textures would need the insane speeds so you could have something like 11GB running at 352 bit gddr6x (850GB/s?) and 4 of the chips using 2 GB Modules giving you an extra 4GB at 128 bit or 300 GB/s for a total of 15 GB.

Yeah, that's what I mean by getting ugly.

Microsoft wanted more bandwidth than 256 bit bus would deliver, but they also wanted 16GB of RAM and didn't want a 512 bit bus. So they chose a messy compromise.

Though since it's an APU, it's a lot easier to find non-GPU housekeeping tasks for the second class memory, while dedicating the 10GB to pure GPU usage.
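For reference, the Series X split works out as follows. The bus width, data rate, and pool sizes are from Microsoft's published specs; the chip layout is my reading of them:

```python
# The Series X asymmetric pool: a 320-bit bus (ten 32-bit channels)
# populated with six 2GB and four 1GB GDDR6 chips at 14 Gbps, 16GB total.
GBPS = 14
fast_bw = (10 * 32) / 8 * GBPS   # 10GB region striped across all ten chips
slow_bw = (6 * 32) / 8 * GBPS    # extra 6GB lives only on the six 2GB chips

print(f"10GB at {fast_bw} GB/s, 6GB at {slow_bw} GB/s")
# 10GB at 560.0 GB/s, 6GB at 336.0 GB/s
```

The slow region only spans six chips, so it only gets six channels' worth of bandwidth: that's the "messy compromise".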
 
Yeah, that's what I mean by getting ugly.

Microsoft wanted more bandwidth than 256 bit bus would deliver, but they also wanted 16GB of RAM and didn't want a 512 bit bus. So they chose a messy compromise.

Though since it's an APU, it's a lot easier to find non-GPU housekeeping tasks for the second class memory, while dedicating the 10GB to pure GPU usage.

Does anyone here actually know the reason that you can't have a 512 bit bus with GDDR6, other than cost (which is laughable on a $1200 card)?
512 bit would allow you to have enough bandwidth using even the cheapest GDDR6 modules while having the "perfect" 16 GB vram pool.
 
Are you saying the 3080 Ti equivalent? Since you can just claim the 3090 is not a 3080 Ti equivalent when it comes out with better than a 30% boost.
If the 3090 is the Titan, then the 3080 (Ti) will be the equivalent of the 2080 Ti, so I suppose the 3080 (Ti) will be ~30% over the 2080 Ti.
 
Neither of the prices is reasonable; it wasn't possible to vote on that basis.
If they are decent prices at launch I will purchase the top card.
If not, I'll likely play games 2 years behind release if needed and buy 2nd hand hardware.
I can easily afford whatever I like, but ridiculous prices for cards every 2 years is not something I am going to engage with, and I will take a stand against it.

If NVidia don't want to reduce interest in the PC gaming market, they had better be sensible.
 
Does anyone here actually know the reason that you can't have a 512 bit bus with GDDR6, other than cost (which is laughable on a $1200 card)?
512 bit would allow you to have enough bandwidth using even the cheapest GDDR6 modules while having the "perfect" 16 GB vram pool.

I expect it's very difficult to route all the channels, which makes the board more expensive/difficult to make, and maybe more signal integrity issues.
 
Some people do other things on top of gaming.

Point stands. Yeah, a gaming card first. I know some people do more than gaming, but there are cards for that which will do gaming and a more pro-oriented setup.
 
I expect it's very difficult to route all the channels, which makes the board more expensive/difficult to make, and maybe more signal integrity issues.

It seemed like it was done relatively cheaply with the R9 390. So really there are 3 ways (other than HBM) to get over 12 GB with enough bandwidth:
A. 24 GB of GDDR6x using 384 bit bus
B. 16 GB of GDDR6x Hybrid akin to the Series X
C. 16 GB of cheap GDDR6 using a 512 bit bus

Option C seems like it would be cheaper than option A, while being easier to develop for than option B.
 
It seemed like it was done relatively cheaply with the R9 390. So really there are 3 ways (other than HBM) to get over 12 GB with enough bandwidth:
A. 24 GB of GDDR6x using 384 bit bus
B. 16 GB of GDDR6x Hybrid akin to the Series X
C. 16 GB of cheap GDDR6 using a 512 bit bus

Option C seems like it would be cheaper than option A, while being easier to develop for than option B.

But you really don't need to go over 12GB, so:

Option D: 12GB of GDDR6X is even cheaper and delivers the same bandwidth.

Option A: remains as a high profit margin option for those willing to pay through the nose for it. It doesn't cost NVidia more to offer this option; it makes them more money, while still having cheaper boards and a cheaper chip (fewer memory controllers).
 
Point stands. Yeah, a gaming card first. I know some people do more than gaming, but there are cards for that which will do gaming and a more pro-oriented setup.
Yeah ... a gaming card with more VRAM.
 
It seemed like it was done relatively cheaply with the R9 390. So really there are 3 ways (other than HBM) to get over 12 GB with enough bandwidth:
A. 24 GB of GDDR6x using 384 bit bus
B. 16 GB of GDDR6x Hybrid akin to the Series X
C. 16 GB of cheap GDDR6 using a 512 bit bus

Option C seems like it would be cheaper than option A, while being easier to develop for than option B.

Doesn't a 512 bit bus also require more die space, and thus drive up costs on top of the cost of the extra RAM?
 
Neither of the prices is reasonable; it wasn't possible to vote on that basis.
If they are decent prices at launch I will purchase the top card.
If not, I'll likely play games 2 years behind release if needed and buy 2nd hand hardware.
I can easily afford whatever I like, but ridiculous prices for cards every 2 years is not something I am going to engage with, and I will take a stand against it.

If NVidia don't want to reduce interest in the PC gaming market, they had better be sensible.

LOL. Halo cards being expensive isn't going to reduce PC gaming interest. If PC gamers think they deserve a Halo card at their price then the problem isn't NV.
 
Seeing as my 1080 Ti has 11GB. 12GB wouldn't be much of an upgrade. So it looks like 24GB for me.
 
Doesn't a 512 bit bus also require more die space, and thus drive up costs on top of the cost of the extra RAM?

I am just basing this on OP's cost estimate of GDDR6X being $33/GB, or $800 total for 24 GB. I would imagine run of the mill 14 Gbps GDDR6 would be under $20/GB, or about $300 for 16 GB. That's about $500 of wiggle room to run the extra paths. Yeah, I know it was GDDR5, but AMD sold an entire 512 bit card for like $300.

I agree that 'option D' is the best but I am just giving what if scenarios for pushing past 12 GB.
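The arithmetic behind that wiggle-room estimate, using the thread's assumed per-GB prices (neither is a confirmed figure):

```python
# Hypothetical per-GB prices from the thread -- neither is confirmed.
gddr6x_per_gb = 33   # OP's estimate for GDDR6X
gddr6_per_gb = 20    # guess for plain 14 Gbps GDDR6

option_a = 24 * gddr6x_per_gb   # 24GB GDDR6X on a 384-bit bus
option_c = 16 * gddr6_per_gb    # 16GB GDDR6 on a 512-bit bus

print(option_a, option_c, option_a - option_c)  # 792 320 472
```

$792 vs $320 on memory alone leaves roughly $470-500 to spend on the wider bus, extra controllers, and more complex board.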
 
LOL. Halo cards being expensive isn't going to reduce PC gaming interest. If PC gamers think they deserve a Halo card at their price then the problem isn't NV.
It will drive many to console or not bother with AAA games until their next upgrade in a few years.
It's nothing to do with deserve; it's pricing getting out of hand.
It's a hell of a lot cheaper purchasing everything later.
 
Doesn't a 512 bit bus also require more die space, and thus drive up costs on top of the cost of the extra RAM?

Yes. More die space for the wider memory controller, more connection space. Then more complex board. That's why it's a last resort.
 
It will drive many to console or not bother with AAA games until their next upgrade in a few years.
It's nothing to do with deserve; it's pricing getting out of hand.

Yeah, and the Aston Martin Valkyrie is 3 million dollars.

That's why I am switching to a bicycle to get around. :D
 
Yeah, and the Aston Martin Valkyrie is 3 million dollars.

That's why I am switching to a bicycle to get around. :D

I don’t think that analogy applies here since consoles determine how the next generation of games will look and play. The 3090 might run them at higher resolution and framerate but will it be enough to warrant such a huge cost disparity? Most likely not for a lot of enthusiasts, especially in a dying economy.

Then again, there are fools out there that toss thousands at streamers, so either they are the 1% or mentally ill (or both), so nvidia could be targeting them. In which case it should price the 3090 at $2500 and package it with Dr Disrespect's face on the box to help sell it at a real premium.

I know that for me, spending money is usually done sensibly so if this thing really is at a premium, I’ll just toss that money into nvidia and AMD stock as usual and profit off the guys making these purchases.
 
Meh, I'll buy what I want. PC gaming is a cheap hobby. No big deal to throw a couple thousand a year at it. I spend more than that on ammo every year.
 
Asking this question with a poll means you have no idea what the framebuffer is for or what your needs are.

For games only, 8GB is sufficient.
For productivity, 24GB hands down.

You are going to derive zero benefit from a huge frame buffer for just playing games. Games right now are not even coded to make use of the 11GB on 2080 Tis. 24GB is truly a production-level amount.

My DaVinci Resolve work will massively benefit, while your World of Warcraft will not.

Anyway, I strongly suspect these cards are not going to have 24GB, even though I earlier supported such a claim.
 
I don’t think that analogy applies here since consoles determine how the next generation of games will look and play. The 3090 might run them at higher resolution and framerate but will it be enough to warrant such a huge cost disparity? Most likely not for a lot of enthusiasts, especially in a dying economy.

It does apply, because he is basically saying:

Because the absolute top end discrete GPU is too expensive, it's time to give up on ALL discrete GPUs.

So I said:

Because the absolute top end Car is too expensive, it's time to give up on ALL Cars.

It's the exact same "logic".
 
I'm going to need to see pricing and some #'s. I upgrade my video card every 2 years, so I'm going to need to know that 24GB is something I'm going to need soon. Future proofing with a GPU isn't a thing I'm concerned with - I want something with immediate impact.
 
It does apply, because he is basically saying:

Because the absolute top end discrete GPU is too expensive, it's time to give up on ALL discrete GPUs.

So I said:

Because the absolute top end Car is too expensive, it's time to give up on ALL Cars.

It's the exact same "logic".

Where did I say that?
 
Seeing as my 1080 Ti has 11GB. 12GB wouldn't be much of an upgrade. So it looks like 24GB for me.

The more processing power the GPU has, the faster it can process stuff loaded in VRAM => less VRAM required. Obviously the GPU's processing capabilities and the amount of VRAM required to not get starved go hand in hand, which is why both increase over time, but I also don't see the point of making graphics cards more expensive than they need to be. For 8K the VRAM requirements would be higher, but GPUs aren't fast enough to process that right now anyway; we're just about getting newer games in 4K playable at higher than 60 FPS. Having said that, 12GB isn't that much for a card of its power, but for no greater than 4K it should be fine for the latest games for 2-3 or so years. I think Nvidia is scared of Big Navi and is trying to avoid making too costly cards this time around, as AMD might have a winner on its hands.
 
The more processing power the GPU has, the faster it can process stuff loaded in VRAM => less VRAM required. Obviously the GPU's processing capabilities and the amount of VRAM required to not get starved go hand in hand, which is why both increase over time, but I also don't see the point of making graphics cards more expensive than they need to be. For 8K the VRAM requirements would be higher, but GPUs aren't fast enough to process that right now anyway; we're just about getting newer games in 4K playable at higher than 60 FPS. Having said that, 12GB isn't that much for a card of its power, but for no greater than 4K it should be fine for the latest games for 2-3 or so years. I think Nvidia is scared of Big Navi and is trying to avoid making too costly cards this time around, as AMD might have a winner on its hands.

Honestly there's no real reason behind it except numbers and e-peen.
 