How many 4080/4090 owners plan to upgrade to the respective 50 series card when available?

Yeah I did, meant nothing to me when I bought the 6900XT, and I don't plan to upgrade again as it plows through anything I play at 1440p. I wanted a raster powerhouse and it fit the bill, and it was cheaper and easier to find than the Nvidia 3090. Pretty much 100 fps or better at maxed-out settings in everything I play, which pairs well with my 144Hz monitor. Ray tracing just isn't there yet as a must-have feature; maybe in 4 or 5 years it will be, in my opinion.

It's a must-have in most any game that has implemented it, and that number is growing rapidly. You're leaving heaps of image quality on the table by not using 4K ray tracing. 1440p was 2008's high-res (2560x1600); it hasn't been in a long time now. You're running a 3080-tier (not 3090) card with poor ray tracing and no DLSS, and paid $1k? Wow. I paid $779 plus tax for my 3080 in Dec 2020, at the height of the crypto craze, even! EDIT: It's a hybrid EVGA FTW3 Ultra, at that. Ray tracing is the new "ultra" setting.
 
It's a must-have in most any game that has implemented it, and that number is growing rapidly. You're leaving heaps of image quality on the table by not using 4K ray tracing. 1440p was 2008's high-res (2560x1600); it hasn't been in a long time now. You're running a 3080-tier (not 3090) card with poor ray tracing and no DLSS, and paid $1k? Wow. I paid $779 plus tax for my 3080 in Dec 2020, at the height of the crypto craze, even!

Cool story, literally don't care. It plays my games well and that is all that matters to me. I also see no benefit to 4K, and I tried ray tracing in a couple of games; it just was not that impressive at all, and I strained in Hogwarts to notice a difference. I just don't have room on my desk for a screen bigger than 35", and at that size 4K really didn't look any different to me than 1440p, unlike on my 65" TV in my living room.

Also, you need to learn to be happy with what you purchase; what is great for you is not always great for someone else. The 3080 had under 16 gigs of memory, which is why I would not buy one.
 
If you're spending $1,000+ on a video card these days then you can't ignore ray tracing.

I hugely agree if we're talking about today, but back then RT still wasn't a huge selling point IMO. I even bought a 6900XT myself because I didn't care about RT. Today, of course, I absolutely do, which is another reason I'm skipping AMD.
 
It's a must-have in most any game that has implemented it, and that number is growing rapidly. You're leaving heaps of image quality on the table by not using 4K ray tracing. 1440p was 2008's high-res (2560x1600); it hasn't been in a long time now. You're running a 3080-tier (not 3090) card with poor ray tracing and no DLSS, and paid $1k? Wow. I paid $779 plus tax for my 3080 in Dec 2020, at the height of the crypto craze, even! EDIT: It's a hybrid EVGA FTW3 Ultra, at that. Ray tracing is the new "ultra" setting.

Anything less than a 4090, at this point, is not a 4K ray tracing card. Even the 4090 takes a massive hit, but it's typically playable (I'm looking at you, Cyberpunk). DLSS or FSR help immensely, but those features aren't free. You'll take a latency hit and it remains an approximation. An impressive approximation, but an approximation nonetheless.

I have to agree with Gideon and say we're still a generation or two out before we'll see a good solution for it. You're free to disagree, but the only card that can hold 4K 60 fps consistently right now is the 4090 (except for Cyberpunk, because Cyberpunk). The 4080 is a $1200 card and I don't consider it a 4K ray tracing card.
 
Don't have a crystal ball, so upgrading is more a wait-and-see on what options become available. Who knows, maybe Intel will have something that just blows away both AMD and Nvidia; while I doubt it, it is possible. UE5 games and all the eye candy that goes along with them, plus how AI plays out in games, may be the real decider on the upgrade path. That, and whether we end up in a super depression or some other catastrophic event plaguing the world.
 
Anything less than a 4090, at this point, is not a 4K ray tracing card. Even the 4090 takes a massive hit, but it's typically playable (I'm looking at you, Cyberpunk). DLSS or FSR help immensely, but those features aren't free. You'll take a latency hit and it remains an approximation. An impressive approximation, but an approximation nonetheless.

I have to agree with Gideon and say we're still a generation or two out before we'll see a good solution for it. You're free to disagree, but the only card that can hold 4K 60 fps consistently right now is the 4090 (except for Cyberpunk, because Cyberpunk). The 4080 is a $1200 card and I don't consider it a 4K ray tracing card.
More like an RT-assist card. Better lighting techniques developed from the ground up rather than as add-ons, something like what Unreal Engine 5.x games are doing, seem to be the path forward. Having RT only be decent on the current generation's super card is, as I see it, never the right answer. RT that only works well for one brand and one generation is DOA.
 
More like an RT-assist card. Better lighting techniques developed from the ground up rather than as add-ons, something like what Unreal Engine 5.x games are doing, seem to be the path forward. Having RT only be decent on the current generation's super card is, as I see it, never the right answer. RT that only works well for one brand and one generation is DOA.

Honestly, that seems to be what Nvidia is sorta doing here. That new path-tracing feature found in Cyberpunk 2077 is really only playable on either a 4080 or 4090, and it relies on FG if you want to get high fps. I wouldn't be surprised if, when the RTX 50 series comes out, they push out some "Path Tracing 2.0" update for Cyberpunk 2077 that will only be playable on the RTX 5080 or 5090.
 
More like an RT-assist card. Better lighting techniques developed from the ground up rather than as add-ons, something like what Unreal Engine 5.x games are doing, seem to be the path forward. Having RT only be decent on the current generation's super card is, as I see it, never the right answer. RT that only works well for one brand and one generation is DOA.

Lumen looks impressive and is far less computationally expensive than RT. RT is technically “better”, but will it be enough for developers to want to put it in when they can just use Lumen and get close enough at a fraction of the computational cost? In some cases, I’m sure, especially studios working closely with Nvidia, but in general I imagine many will question if it’s worth it.
 
Honestly, that seems to be what Nvidia is sorta doing here. That new path-tracing feature found in Cyberpunk 2077 is really only playable on either a 4080 or 4090, and it relies on FG if you want to get high fps. I wouldn't be surprised if, when the RTX 50 series comes out, they push out some "Path Tracing 2.0" update for Cyberpunk 2077 that will only be playable on the RTX 5080 or 5090.

CDPR is really trying to push the graphical envelope with that game, and I love them for it. Really great way to see what the future of gaming can look like.
 
CDPR is really trying to push the graphical envelope with that game, and I love them for it. Really great way to see what the future of gaming can look like.

While I do like it too, I also don't think pushing out new RT modes that only the very top end of the latest generation of GPUs can actually render at playable frame rates is the best way to get people to care about RT. Instead, something like Metro Exodus Enhanced, which brought RTGI to the masses by working even on console, or Unreal Engine's Lumen tech, which again works on a much greater variety of hardware including consoles, would be the better approach to get people to start caring about RT.
 
While I do like it too, I also don't think pushing out new RT modes that only the very top end of the latest generation of GPUs can actually render at playable frame rates is the best way to get people to care about RT. Instead, something like Metro Exodus Enhanced, which brought RTGI to the masses by working even on console, or Unreal Engine's Lumen tech, which again works on a much greater variety of hardware including consoles, would be the better approach to get people to start caring about RT.
I don't follow that logic. I could understand if you were forced to use it, but it's an option you can enable or disable.
 
I don't follow that logic. I could understand if you were forced to use it, but it's an option you can enable or disable.

I'm saying that instead of just pushing for new RT tech that only works well on the very latest and greatest GPUs, Nvidia should also put some focus on making RT advancements that run better, with better quality, on lower-end hardware and consoles too. Although they did just release Ray Reconstruction, which works on the oldest RTX cards, so that's something. The more people that can actually experience RT, the better.
 
That new path-tracing feature found in Cyberpunk 2077 is really only playable on either a 4080 or 4090,
A 4070 Ti will do 60 fps at 1440p with DLSS Quality on, and 105 with FG on; the 4070 will do around 70-75 fps with frame gen, which is probably too low.

At 1080p I can see the 4060 maybe pulling it off with DLSS 3.

After all, the Lumen game on console being talked about runs at 720p.

Lumen looks impressive and is far less computationally expensive than RT.
Is it? Or just computationally different? The drop from Lumen on to off in Fortnite looked a lot like RT on/off in Cyberpunk or Control, and the latest Lumen game that just launched needs to run at 720p on a PS5 and seems to need a 7900 XTX to do 1440p/60 correctly; the performance of Cyberpunk at RT Ultra at 1080p is not that dissimilar to Immortals of Aveum at 1080p.
 
While I do like it too, I also don't think pushing out new RT modes that only the very top end of the latest generation of GPUs can actually render at playable frame rates is the best way to get people to care about RT. Instead, something like Metro Exodus Enhanced, which brought RTGI to the masses by working even on console, or Unreal Engine's Lumen tech, which again works on a much greater variety of hardware including consoles, would be the better approach to get people to start caring about RT.

Agreed, particularly on Lumen, but it’s still cool to see what’s possible eventually.
 
Is it? Or just computationally different? The drop from Lumen on to off in Fortnite looked a lot like RT on/off in Cyberpunk or Control, and the latest Lumen game that just launched needs to run at 720p on a PS5 and seems to need a 7900 XTX to do 1440p/60 correctly; the performance of Cyberpunk at RT Ultra at 1080p is not that dissimilar to Immortals of Aveum at 1080p.

I'm not a technology expert, so I would need to defer to a software developer who's more familiar with the nuts and bolts, but based on everything I've read about how Lumen works vs. RT, yes, it is computationally less expensive, which is one of its advantages; the trade-off being made is that RT would be closer to "reality". Being closer to reality takes a heck of a lot more math to calculate, though, so Lumen takes a different approach that looks close enough, in an effort to deliver a similar visual at a smaller cost. Lumen performs fewer calculations to drive the image.

Your comparison between Cyberpunk and Immortals of Aveum may have other factors at play. I don't know. All I know is that Lumen is supposed to perform fewer calculations than RT to generate the image and should therefore result in a higher frame rate, all else being equal.
 
but based on everything I've read about how Lumen works vs. RT, yes, it is computationally less expensive, which is one of its advantages
It seems really hard to find benchmarks of games with Lumen on vs. off, and it seems hard to tell whether it is by far easier compute-wise than, say, Metro Exodus's RT solution, or whether one uses matrix multiplication with hardware specific to it and the other something else.
 
Why? 8K display?? (while that's laughable, realize that Nvidia's long term plan is to not support anything but radical upscaling).
Remember the refresh rate benefits curve is geometric.

That's why NVIDIA is likely going to go to bigger ratios (eventually 8x-10x ratios) for future versions of reprojection-assisted DLSS.

So more brute power will be put toward the temporal dimension. 75% of a GPU could do the 100 fps path-traced rendering, and 25% of the GPU can be used to multiply frame rates by 10x.

120Hz-vs-1000Hz is a bigger human-visible difference than 60Hz-vs-120Hz. It's visible to >90% of the human population (including your grandma, etc.) in a blind test -- it's VHS-vs-8K rather than 720p-vs-1080p, but in the temporal dimension. It's the mandatory large jump needed on the diminishing curve of returns. Retina refresh rates aren't reached until the quintuple digits for the most extreme scientific variables -- but it goes dramatically geometric. Forget the worthless 240Hz-vs-360Hz refresh rate incrementalism.

Strobeless ULMB (aka blurless sample-and-hold) requires extremely high frame rates approaching 1000 fps at 1000 Hz.

It's the only way to go simultaneously blur-free and PWM-free.
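
As a rough sketch of the sample-and-hold math behind that claim (a simplified first-order model; the 3840 px/s pan speed below is just an assumed, illustrative number):

Code:
# Simplified sample-and-hold motion blur model: on a non-strobed (full persistence)
# display, eye-tracking blur is roughly motion speed (px/s) times frame visibility time (s).
def sample_and_hold_blur_px(speed_px_per_s: float, refresh_hz: float) -> float:
    """Approximate eye-tracking motion blur, in pixels, for a full-persistence display."""
    frame_visibility_s = 1.0 / refresh_hz  # each frame is held for the whole refresh interval
    return speed_px_per_s * frame_visibility_s

speed = 3840  # assumed pan speed: one 4K screen width per second
for hz in (60, 120, 240, 1000):
    print(f"{hz:>5} Hz -> ~{sample_and_hold_blur_px(speed, hz):.1f} px of motion blur")
# ~64 px at 60 Hz, ~32 px at 120 Hz, ~16 px at 240 Hz, ~3.8 px at 1000 Hz:
# blur only halves when frame rate doubles, hence the push toward ~1000 fps at 1000 Hz.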

Check the purple Research tab on Blur Busters. People once laughed at 4K and retina displays, too.

What he said is that without using AI and other techniques, we've hit a wall on rasterization performance. You can watch the videos for yourself.
That part I agree with. Yes. 👏

However, it is possible to get 1000fps 1000Hz on a pair of RTX 4090s, with reprojection.

The problem is we need some API mods, engine mods, and software developer technique, to do it properly.

Check the new "lagless frame generation" article recently published on Blur Busters cover page.

[Image: Lagless Frame Generation via Reprojection diagram]


1000fps with path tracing, RTX ON, etc.

No reprojection double images either, due to a trick (reprojection to BLURLESS SAMPLE AND HOLD; win-win). And the starting frame rate should be 100 fps, to keep all temporal artifacts (wiggles, stutters, quivers, edge parallaxing) above the flicker fusion threshold. The magical thing is that if you use a high starting frame rate (100 fps), the reprojection demands become a bit simpler (less optical flow needed), because optical flow exists to fix a lot of low-framerate problems, and optical flow is very compute-heavy. So simpler reprojection algorithms can be applied at 1ms frametime intervals with far fewer artifacts, if your starting frame rate is 100 fps.
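
Back-of-the-envelope, the budget split works out roughly like this (the 0.25 ms per-reprojected-frame cost below is an assumed, illustrative figure, not a measurement):

Code:
# Rough frame-budget arithmetic for reprojection-assisted frame generation
# with the numbers from this post (100 fps rendered base, 1000 Hz displayed).
base_fps = 100                            # fully path-traced frames rendered per second
target_hz = 1000                          # desired displayed frame rate
ratio = target_hz / base_fps              # reprojected frames per rendered frame -> 10x
display_frametime_ms = 1000 / target_hz   # 1 ms per displayed frame

render_budget_ms = 1000 / base_fps        # 10 ms window per rendered frame
reproject_cost_ms = 0.25                  # ASSUMED cost to reproject one frame on the same GPU

# Share of each 10 ms render window spent reprojecting the 10 displayed frames:
reproject_share = (ratio * reproject_cost_ms) / render_budget_ms
print(f"{ratio:.0f}x ratio, {display_frametime_ms:.1f} ms displayed frametime")
print(f"~{reproject_share:.0%} of GPU time reprojecting, ~{1 - reproject_share:.0%} rendering")
# With these assumed numbers: 10x ratio, 1.0 ms frametimes, ~25% reprojection / ~75% rendering,
# matching the rough split mentioned earlier in the thread.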

Want the long version? See the "Developer Best Practices" chapter in the new long article. I got only accolades from the AMD & NVIDIA employees that I showed the article to.
 
Firm no, because my systems are perfectly "balanced" right now. Changing even a single component will throw their synergy off. Not worth it.
 
Firm no, because my systems are perfectly "balanced" right now. Changing even a single component will throw their synergy off. Not worth it.
If no GPU stronger than the 5090 releases until, say, 2028, do you think you will be able to go that long without updating your CPU/GPU?
 
If no GPU stronger than the 5090 releases until, say, 2028, do you think you will be able to go that long without updating your CPU/GPU?
Yea, absolutely. I have 2 rigs, main and backup: a 13900KS + 4090 on water in the main rig and a 12700K + 4090 on air in the backup rig. Both are super nice rigs; there is no possible way anything is changing on either rig, not a 50 series GPU and not a 14th gen CPU, no way 😂 We're holding down the fort, boys!!!
 
I'm somewhat less than confident that someone who even thought about, let alone actually did, updating a 12700K to a 13900K and bought two 4090s will not buy again in 2025, if Lunar Lake delivers, the 5090 is 70% stronger, and most new games cannot run steadily at 40 fps at 4K on a 4090.
 
It seems really hard to find benchmarks of games with Lumen on vs. off, and it seems hard to tell whether it is by far easier compute-wise than, say, Metro Exodus's RT solution, or whether one uses matrix multiplication with hardware specific to it and the other something else.

I'm sure we'll see more in the near future as UE5 increases in adoption, but based on the design principles of both Lumen and RT, I would say it would be basically impossible, in a like-for-like comparison, for Lumen to consume more computational power than RT, based on what I've read. The caveat is that the RT picture looks "better", which makes sense, because more elements are being considered and calculated. The question you have to ask yourself is the same one we've been asking all along: is it worth the framerate hit for the marginal gain in visual quality? I would say yes in a single-player game, provided you can still maintain 60 fps, and almost always no in a multiplayer game, since frames are more critical to winning, but that's just my opinion. Also, the results created by Lumen and RT are fairly close; would you notice the differences as you're running around in a faster-paced game concentrating on completing objectives? RT looks phenomenal in screenshots, but screenshots are static. Is there enough of a visual difference between the two to care, particularly given the use case?

In any case, it also might be more difficult to find a fair comparison since I would imagine that studios working with Lumen likely wouldn't bother with RT, but again, I don't know. I personally don't see why a studio already using Lumen would bother to also try to implement RT, but I'm sure there are other factors they would consider when making that call.
 
The price argument now? Really? OK, then: the 6950 XT costs nearly HALF as much as the 7900 XTX but provides about 80% of the performance, so I would hope the 7900 XTX is the better product. Price means nothing if you don't get the performance you are targeting. You could sell me a GPU that's 1/3 the performance of the 4090 while only costing 10% of the price and I still wouldn't buy it.
The 7900 XTX is the "next gen" though, so there's a performance lift and hopefully other benefits. AMD doesn't seem to be making strides on power efficiency, though; the 6900/6950 XT seems power hungry too for that gen. My evaluation and research of RDNA 2 cards makes me conclude they're "bad buys": good for gaming but not competitive for ML, compute, or video editing compared to Nvidia's similar gen (adjacent to that release). However, the second-hand market is pretty good for those cards; at release prices, I didn't think they were good buys.

In Blender and video editing (and, AFAIK, in ML/AI), they're so-so to mediocre. Plus, when you factor in the transient spikes... I'd rather have a 3080 Ti or 3090.
 
If you're spending $1,000+ on a video card these days then you can't ignore ray tracing.
Is the 7900 XTX good at ray tracing? I am finding some used 7900 XTX cards on the market here for $1000+ CAD (that's about $800 USD, so a good deal if the card is in good condition, i.e. no coil whine, no temp issues). I was mostly considering Nvidia cards, but I'm definitely looking at this series; for this gen, the only AMD cards I'm considering are 7900 XTXs. I read that some people say the ray tracing is pretty good: not as good as Nvidia's, but definitely better than previous iterations.
 
Is the 7900 XTX good at ray tracing? I am finding some used 7900 XTX cards on the market here for $1000+ CAD (that's about $800 USD, so a good deal if the card is in good condition, i.e. no coil whine, no temp issues). I was mostly considering Nvidia cards, but I'm definitely looking at this series; for this gen, the only AMD cards I'm considering are 7900 XTXs. I read that some people say the ray tracing is pretty good: not as good as Nvidia's, but definitely better than previous iterations.
It's certainly ok, but probably this deserves its own thread, you know?
 
Is the 7900 XTX good at ray tracing? I am finding some used 7900 XTX cards on the market here for $1000+ CAD (that's about $800 USD, so a good deal if the card is in good condition, i.e. no coil whine, no temp issues). I was mostly considering Nvidia cards, but I'm definitely looking at this series; for this gen, the only AMD cards I'm considering are 7900 XTXs. I read that some people say the ray tracing is pretty good: not as good as Nvidia's, but definitely better than previous iterations.
It's about 3080 to 3090 speed on raytracing.
 
Is the 7900 XTX good at ray tracing? I am finding some used 7900 XTX cards on the market here for $1000+ CAD (that's about $800 USD, so a good deal if the card is in good condition, i.e. no coil whine, no temp issues). I was mostly considering Nvidia cards, but I'm definitely looking at this series; for this gen, the only AMD cards I'm considering are 7900 XTXs. I read that some people say the ray tracing is pretty good: not as good as Nvidia's, but definitely better than previous iterations.

It’s around RTX 3090 to RTX 3090 Ti performance, so certainly no slouch, but a generation behind Nvidia. That’s for RT titles though. In Lumen, they’re right in line with Nvidia last I checked.

The follow up question is whether or not that matters to you. If you play a lot of single player games using RT, then AMD is not your card. If you play multiplayer games, then you probably don’t care.
 
It's certainly ok, but probably this deserves its own thread, you know?
Probably. :) Well, I started a thread where I mention I'm considering a 3090, 4080, or 7900 XTX for my next GPU upgrade. I am interested in ray tracing, but especially for the other uses besides gaming; it's interesting for the gaming aspects, too. :)

Feel free to comment about it there?

It’s around RTX 3090 to RTX 3090 Ti performance, so certainly no slouch, but a generation behind Nvidia. That’s for RT titles though. In Lumen, they’re right in line with Nvidia last I checked.

The follow up question is whether or not that matters to you. If you play a lot of single player games using RT, then AMD is not your card. If you play multiplayer games, then you probably don’t care.
Yeah, that's not bad. I've read a few comments/attitudes towards ray tracing: it's "overrated", it will be a while until graphics cards are useful for ray tracing, etc. I think 3090/3090 Ti performance for a 7900 XTX in ray tracing is acceptable, especially considering it's an AMD card.

I prefer multiplayer games, although, to be honest, I only recently got into gaming and I don't do it that often, at least not now, since I sold my 3080. I didn't anticipate taking this long to save up and get another card. I'm also good at being indecisive. LOL!

I've gone off topic too much, it seems? :) The good part is that if people move to the next gen and sell their 4080s and 4090s to go 50 series, it means more Ada cards on the market, and hopefully some good second-hand prices, since that's what I'm looking/hoping for.

P.S. There are some games I notice from YouTuber benchmarks that look really cool (I'd like to get/try them), and I do check the ray tracing difference/penalty.
 
I mean... It 100% depends on the price, the performance, how available it is, and so on. There really isn't a reason to "plan" on an upgrade if you have the current gen stuff as you don't know what the next gen will bring. It could be amazing, in which case I'll probably get it because I spend too much money on computer toys. However it could be a "meh" kind of upgrade, in which case I'll probably give it a miss.

Another unknown is size/power draw. The 4090 is really pushing it. I don't think I can go much bigger or more power hungry. If the 5090 is double the speed, but I have to install a dedicated 20A circuit and buy an as of yet unreleased 2000W PSU to run it, nah man, I'm good.

Since all the 5000 series stuff is pure speculation right now, I see no reason to make any plans.
 
I mean... It 100% depends on the price, the performance, how available it is, and so on. There really isn't a reason to "plan" on an upgrade if you have the current gen stuff as you don't know what the next gen will bring. It could be amazing, in which case I'll probably get it because I spend too much money on computer toys. However it could be a "meh" kind of upgrade, in which case I'll probably give it a miss.

Another unknown is size/power draw. The 4090 is really pushing it. I don't think I can go much bigger or more power hungry. If the 5090 is double the speed, but I have to install a dedicated 20A circuit and buy an as of yet unreleased 2000W PSU to run it, nah man, I'm good.

Since all the 5000 series stuff is pure speculation right now, I see no reason to make any plans.
My 4090 on average draws about the same power as my 3080 did. They aren't that power hungry, at least not when you compare them to a 3090 Ti.
 
My 4090 on average draws about the same power as my 3080 did. They aren't that power hungry, at least not when you compare them to a 3090 Ti.
It's not a matter of average, but peak. Systems must be engineered to be able to deal with peak demand. The PSU, the power circuit, all have to be able to handle the max demand. In the case of the 4090, that's 450 watts on my non-overclocked unit. Add in all the other system components and it is fine, a 1000W PSU is more than enough, and 750W would probably do the trick. However 1000W PSUs are already pushing over half of what a NEMA 5-15P can deliver. I don't know about you, but the circuit for my computer room shares with other stuff as well. I don't want to push much past this in terms of power draw, I don't want to have to wire up something dedicated.

Likewise there's just the physical size. I have a bigass case and the 4090 still dominates it. It's so big I can't use the second PCIe slot for something like a 10gig NIC or soundcard or anything. It is obnoxiously large and I don't feel like it can get much larger and still be accommodated in any kind of case I'm willing to have. The only other option is then to increase fan speed, that's what servers do, and I really don't care for that.

I'm just getting at my limit of how big and hungry I'm willing to buy.
 
At least in terms of power spikes, the 4090 seems to behave better than Ampere (and some RDNA 2 models), which was another issue: you had high peak power, but also brief spikes that could shut the computer off as well.



I could see them not going any bigger (in power or card size): their cooling gets better, power delivery gets better, and Lovelace was overbuilt (maybe they wanted to keep a 500-600 watt option in mind, and the node advantage may have caught them a bit by surprise with just how good it was), which makes that easy. It is already giant and an issue for many cases.

Especially if the story that RDNA 4 will not compete with the 80/90 class stays true, why push an N3 card that requires more power, cooling, and size than the current 4090? Lovelace goes into diminishing returns quite a bit past the 350 watt or so bar as well. And I imagine they could feel there's no way they can have two generations of cards starting small fires in a row.
 
It's not a matter of average, but peak. Systems must be engineered to be able to deal with peak demand. The PSU, the power circuit, all have to be able to handle the max demand. In the case of the 4090, that's 450 watts on my non-overclocked unit. Add in all the other system components and it is fine, a 1000W PSU is more than enough, and 750W would probably do the trick. However 1000W PSUs are already pushing over half of what a NEMA 5-15P can deliver. I don't know about you, but the circuit for my computer room shares with other stuff as well. I don't want to push much past this in terms of power draw, I don't want to have to wire up something dedicated.

Likewise there's just the physical size. I have a bigass case and the 4090 still dominates it. It's so big I can't use the second PCIe slot for something like a 10gig NIC or soundcard or anything. It is obnoxiously large and I don't feel like it can get much larger and still be accommodated in any kind of case I'm willing to have. The only other option is then to increase fan speed, that's what servers do, and I really don't care for that.

I'm just getting at my limit of how big and hungry I'm willing to buy.
While I understand your concern, I don't think it's going to be as big of an issue as you think. I'm actually surprised no one has made a more compact 4090 given how cool they actually run. It seems AIBs were expecting 3090Ti level of power and heat which just didn't happen. I have a MSI Gaming X 4090 which is one of the hotter running ones and I'm typically under 70C with default fan curve. I'm not really worried about my circuit. A 1000watt PSU puts the same load on it as a 750watt or a 1500watt provided they are powering the same components and have similar efficiency ratings.
 
While I understand your concern, I don't think it's going to be as big of an issue as you think. I'm actually surprised no one has made a more compact 4090 given how cool they actually run. It seems AIBs were expecting 3090Ti level of power and heat which just didn't happen. I have a MSI Gaming X 4090 which is one of the hotter running ones and I'm typically under 70C with default fan curve. I'm not really worried about my circuit. A 1000watt PSU puts the same load on it as a 750watt or a 1500watt provided they are powering the same components and have similar efficiency ratings.
I know that a PSU of a given size draws (about) the same power at a given load, what I'm talking about is if they increase the load on the 5090, I don't know if I'll be interested because of issues relating to it. Like let's say they go ham and go to 800W. Ok well then I'll need a new PSU, because my CPU can draw 250W by itself (13900K no OC) and we need some overhead for other components so probably somewhere in the 1200-1300W range. That's doable, though expensive since I might want overhead on the PSU as well. But now we look at the other side. 1300W DC is going to be about 1450W AC assuming an 80 Plus Titanium supply meeting its rating at full load. That's over 12 amps, assuming no voltage sag and over 13 if it sags to 110 volt. That leaves an uncomfortably small amount of power left over on that 15 amp plug, particularly since I have other things on the same circuit in the room.
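
Roughly, that arithmetic works out like this; a quick sketch using the numbers from this post, assuming ~90% efficiency at full load for an 80 Plus Titanium unit on a 115/120 V line:

Code:
# Wall draw for a given DC load, assuming ~90% PSU efficiency at full load
# (80 Plus Titanium on 115/120 V) and a 15 A NEMA 5-15 branch circuit.
def wall_draw_amps(dc_load_w: float, efficiency: float, line_v: float) -> float:
    """AC current at the wall for a given DC load delivered by the PSU."""
    return (dc_load_w / efficiency) / line_v

dc_load_w = 1300.0   # hypothetical build: ~800 W GPU + 250 W CPU + other components
efficiency = 0.90    # approximate Titanium efficiency at full load
for line_v in (120.0, 110.0):
    amps = wall_draw_amps(dc_load_w, efficiency, line_v)
    print(f"{line_v:.0f} V line: ~{amps:.1f} A on a 15 A circuit")
# ~12.0 A at 120 V and ~13.1 A at 110 V, which leaves very little headroom
# for anything else sharing the same breaker.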

So really, I need to either upgrade the room to 20A plugs or run a dedicated line at that point. Both costly, neither interest me. Could I run it without doing so? Ya probably, my system is probably not going to hit max draw. But that's not a good idea for the same reason running an undersized PSU isn't: You spec for peak load, not average.

Now will that happen? No idea, my guess is not, but my point is just that if they keep pushing the high end by pushing the power budget, at some point soon I'm going to tap out and not be interested. I'll wait longer and buy lower end.

The flipside being if they don't push the power budget, it might turn out that the improvement just isn't big enough for me to want one, and I'll wait a generation. Like if the 5090 comes out and is 25% faster than a 4090 that's not nothing, but not enough for me to care to buy one.

My long-winded point being there's just too much unknown to say "Yep, I'll buy one!" right now.
 
I don't own a 4xxx series card, but if I did it would be ASAP. Nvidia has a terrible history with the number 4.
 
It's not a matter of average, but peak. Systems must be engineered to be able to deal with peak demand. The PSU, the power circuit, all have to be able to handle the max demand. In the case of the 4090, that's 450 watts on my non-overclocked unit. Add in all the other system components and it is fine, a 1000W PSU is more than enough, and 750W would probably do the trick. However 1000W PSUs are already pushing over half of what a NEMA 5-15P can deliver. I don't know about you, but the circuit for my computer room shares with other stuff as well. I don't want to push much past this in terms of power draw, I don't want to have to wire up something dedicated.

Likewise there's just the physical size. I have a bigass case and the 4090 still dominates it. It's so big I can't use the second PCIe slot for something like a 10gig NIC or soundcard or anything. It is obnoxiously large and I don't feel like it can get much larger and still be accommodated in any kind of case I'm willing to have. The only other option is then to increase fan speed, that's what servers do, and I really don't care for that.

I'm just getting at my limit of how big and hungry I'm willing to buy.

A single 4090 and probably 5090 is nothing in terms of space taken up and power draw compared to 10+ years ago when people were running Quad GTX 480s lol.

 