Nvidia pulls in some $18.1B, up from $5.93B this time last year...

You forget they skipped Volta consumer cards, I guess. Volta was no faster than the previous gen in terms of raster.
They didn't skip Volta for consumer; it was never meant for consumers, just like Hopper isn't. Really now? THAT'S your example? You really think they won't improve raster for two full gens? That's absurd.
 
That is true assuming Jensen is willing to sell that silicon into the low-profit gaming sector.
The thing is, if Blackwell really is going to provide 2-2.6x the AI performance... there is no way in hell Jensen is dedicating even 3/4-functional parts to gaming. If those gains are even half true, a 3/4-functional Blackwell die is going to outperform their current $20k AI accelerators.
I have no doubt at this point Nvidia will ship 4090/80 Super cards and that will be the only high-end stuff they ship until a Blackwell refresh. I mean, who could even blame them at this point... it's not like AMD or Intel is rumored to have some uber card releasing in that time frame to stomp the 4090.
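Napkin math on that 3/4-die point, purely illustrative since the 2-2.6x figures are just the rumor numbers above and real scaling is never perfectly linear with enabled units:

```python
# Hypothetical: if a full Blackwell die lands 2.0x-2.6x over the current flagship
# AI part, where does a 3/4-enabled salvage die end up relative to that flagship?
# Assumes performance scales linearly with enabled units, which it won't exactly.
rumored_uplift = (2.0, 2.6)   # rumored full-die gain vs. current gen
enabled_fraction = 0.75       # "3/4 functional" part

for gain in rumored_uplift:
    print(f"{gain}x full die -> ~{gain * enabled_fraction:.2f}x the current flagship")
# 2.0x full die -> ~1.50x the current flagship
# 2.6x full die -> ~1.95x the current flagship
```

Even at the low end of the rumor, a heavily cut-down die would still clear the current $20k parts by a wide margin.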
Well, enterprise gets Blackwell in 2024 with the big GH100; the rest of us need to wait until 2025 before we see it. So no RTX 5000 BK, 6000 BK, or 5080/5090 for a full year after the big boys get their toys, and it will be old hat for them by then.
The 2-2.6x performance increase is for the GH100 over the H100; there is nothing out there at all about the BK102 or any other smaller desktop/workstation silicon yet.
Honestly, when it comes down to the consumer parts I expect them to go smaller. Tensor cores only make up at most 25% of any of the consumer silicon, and I would expect Nvidia to instead drop that down a little and increase raster performance, so we are looking at something like a 20-25% "generational" increase.
I air-quote "generational" because we are going to get two generations of Ada Lovelace, so timeline-wise that would only be a 10-12% annual increase, which is about on par with what we get now really.
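Quick sanity check on the annualized number, just straight arithmetic assuming the 20-25% lands over a two-year cadence:

```python
# Annualized rate for a gain delivered over two years: (1 + total) ** (1/2) - 1
for total_gain in (0.20, 0.25):
    annual = (1 + total_gain) ** 0.5 - 1
    print(f"{total_gain:.0%} over two years is about {annual:.1%} per year")
# 20% over two years is about 9.5% per year
# 25% over two years is about 11.8% per year
```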

I don't look forward to the price tags, but that is when I will be rebuilding my system, so I will likely just need to suck it up. I'm hoping AMD brings something to the table that makes me wildly wrong, but I am not holding my breath on it.
Fortunately, the 1440p monitor is still working great, so as long as I can manage a noticeable performance increase at that resolution I won't be disappointed.
 
They didn't skip Volta for consumer; it was never meant for consumers, just like Hopper isn't. Really now? THAT'S your example? You really think they won't improve raster for two full gens? That's absurd.
I don't know if you got the message from the CEO. Nvidia is an AI company now. How does raster benefit them?
I'm not being facetious, really... the truth is raster is no longer Nvidia's mission statement. I mean, have you not been paying attention to Nvidia for the past few years? The 4060 isn't faster than a 3060... it's slower. It's true across the board, not counting the top of the stack. Nvidia hasn't wanted to dedicate more of its fab allotment than it felt it needed to supply gamers. The 4090 is an anomaly. I would say unless something changes, Nvidia isn't going to be dedicating that size of die to a gaming card again. The upcoming 4090 Super will be the last large gaming chip Nvidia will be willing to part with.

Hey, I could be wrong... I would love to be wrong and see some uber Nvidia card in 2025. I just really don't see the AI boom slowing down. As I said responding to Duke, AI isn't the crypto boom. Actual things are being produced, and I really don't see how in the next 3-4 years Nvidia's position changes at all. They are going to be pressed on fab space at every turn, they are going to struggle to keep up with demand... and the internal pressures we can't see are going to exist as well. Jensen is going to gut the Nvidia gaming department even further... that talent is needed on the AI front, hardware and software side. They will keep a toe in with Super cards... and maybe a Blackwell-refresh mid-range tier of cards. The thing is, IF (big if) AMD and/or Intel were to manage a gaming product that pushes Nvidia, Jensen will just say nope, no more. Or maybe worse... he releases something like a 5090 Ti using the lowest end of the current AI silicon and puts a price tag on it so painful even the most die-hard Nvidia fans will tap out.
 
Honestly, when it comes down to the consumer parts I expect them to go smaller. Tensor cores only make up at most 25% of any of the consumer silicon, and I would expect Nvidia to instead drop that down a little and increase raster performance, so we are looking at something like a 20-25% "generational" increase.
"At most 25%" makes it sound like way more than it is, from most of what I have read. The Turing 1660 Super was RT- and Tensor-core free, and despite having about half the silicon used for cache, the core count per mm² does not support 25% of the 2060 being used for both RT and Tensor; it was probably more like 4 to 7% for the RT cores, and the same for the Tensors.

Some amateur analysis of high-resolution die shots talks of a 2080 Ti fitting in a 684 mm² die instead of the 754 mm² it actually occupies, for example: https://www.techpowerup.com/254452/...ses-tpc-area-by-22-compared-to-non-rtx-turing. That is a good 10%, but not the 25% or even 33% that gets thrown around from looking at artistic drawings of the architecture; maybe it indirectly increases cache demand too.

Could you imagine Nvidia's advantage over AMD if it had managed to keep up with them on Samsung 8 instead of TSMC 7, with similarly sized dies but 25% of them used for Tensor cores (and say 7-8% for RT cores)? 66% of a 3070 would be about 260 mm² of Samsung 8 trying to keep up with 335 mm² of TSMC 7 Navi 22.

At 66% of a 4060 you are down to about 106 mm² trying to compete with the 204 mm² of Navi 33.
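Rough sketch of that math in Python, using the commonly quoted die sizes and treating the 25% Tensor + 7-8% RT split above as the working assumption, so ballpark only:

```python
# Hypothetical: strip the assumed Tensor/RT area out of the Nvidia die and see
# how much "raster silicon" would be left to fight AMD's competing part.
matchups = {
    "3070 (GA104, Samsung 8) vs Navi 22 (TSMC 7)": (392, 335),
    "4060 (AD107, TSMC 4N)  vs Navi 33 (TSMC 6)":  (159, 204),
}
raster_fraction = 0.66  # roughly what is left after the assumed 25% Tensor + 7-8% RT

for label, (nvidia_mm2, amd_mm2) in matchups.items():
    raster_mm2 = nvidia_mm2 * raster_fraction
    print(f"{label}: ~{raster_mm2:.0f} mm2 of raster silicon vs {amd_mm2} mm2")
# ~259 mm2 vs 335 mm2, and ~105 mm2 vs 204 mm2
```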
 
"At most 25%" makes it sound like way more than it is, from most of what I have read. The Turing 1660 Super was RT- and Tensor-core free, and despite having about half the silicon used for cache, the core count per mm² does not support 25% of the 2060 being used for both RT and Tensor; it was probably more like 4 to 7% for the RT cores, and the same for the Tensors.

Some amateur analysis of high-resolution die shots talks of a 2080 Ti fitting in a 684 mm² die instead of the 754 mm² it actually occupies, for example: https://www.techpowerup.com/254452/...ses-tpc-area-by-22-compared-to-non-rtx-turing. That is a good 10%, but not the 25% or even 33% that gets thrown around from looking at artistic drawings of the architecture; maybe it indirectly increases cache demand too.

Could you imagine Nvidia's advantage over AMD if it had managed to keep up with them on Samsung 8 instead of TSMC 7, with similarly sized dies but 25% of them used for Tensor cores (and say 7-8% for RT cores)? 66% of a 3070 would be about 260 mm² of Samsung 8 trying to keep up with 335 mm² of TSMC 7 Navi 22.

At 66% of a 4060 you are down to about 106 mm² trying to compete with the 204 mm² of Navi 33.
Well, some are fewer, but on the AD102 it breaks down to some 24% of the silicon being dedicated to the ray tracing, AI, blah blah blah, going down to the AD106 where it is closer to 16%. The CUDA logic takes up the bulk of both, as that is identical across the whole range and takes up a set footprint within each of the SMs, with some 20-25% of the physical area doing the raster work. But on any given part of the AD series, less than 50% of the chip itself is processing the actual graphics, be it raster, ray-traced, or generated; the majority of the silicon is taken up by IO, memory, or logic.
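Putting those rough splits side by side, using the estimates from this post (with a mid-range value picked from the 20-25% raster figure), not measured die-shot numbers:

```python
# Approximate area split per the estimates above, as percent of total die area.
# "Everything else" covers IO, memory interfaces, cache, and control logic.
splits = {
    "AD102": {"rt_ai_pct": 24, "raster_pct": 22},
    "AD106": {"rt_ai_pct": 16, "raster_pct": 22},
}
for die, s in splits.items():
    graphics = s["rt_ai_pct"] + s["raster_pct"]
    print(f"{die}: ~{graphics}% graphics work, ~{100 - graphics}% IO/memory/logic")
# AD102: ~46% graphics work, ~54% IO/memory/logic
# AD106: ~38% graphics work, ~62% IO/memory/logic
```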

Separating out the IO and memory was one of the goals of RDNA3, because that stuff gets little from node shrinks: cache produced below 6 nm doesn't scale well, I/O logic doesn't scale much past 12 nm or so, and some of the best memory controllers on the market are still made on 16 nm.
It makes me hopeful for their RDNA 4 or 5 parts, once they fix some of their interconnect and latency issues, but those are better attributed to TSMC packaging problems than AMD design problems. TSMC is doing great with their nodes, but their packaging is lagging behind, and what they promised on paper isn't panning out in the real world. They have invested a lot into new facilities for that, but again those won't get here until 2025 or later.

I generally look forward to what is to come in 2026 and beyond, because by then the packaging should be available that will make a lot of the MCM and chiplet designs we have now actually work as intended.
 
I don't know if you got the message from the CEO. Nvidia is an AI company now. How does raster benefit them?
I'm not being facetious, really... the truth is raster is no longer Nvidia's mission statement. I mean, have you not been paying attention to Nvidia for the past few years? The 4060 isn't faster than a 3060... it's slower. It's true across the board, not counting the top of the stack. Nvidia hasn't wanted to dedicate more of its fab allotment than it felt it needed to supply gamers. The 4090 is an anomaly. I would say unless something changes, Nvidia isn't going to be dedicating that size of die to a gaming card again. The upcoming 4090 Super will be the last large gaming chip Nvidia will be willing to part with.

Hey, I could be wrong... I would love to be wrong and see some uber Nvidia card in 2025. I just really don't see the AI boom slowing down. As I said responding to Duke, AI isn't the crypto boom. Actual things are being produced, and I really don't see how in the next 3-4 years Nvidia's position changes at all. They are going to be pressed on fab space at every turn, they are going to struggle to keep up with demand... and the internal pressures we can't see are going to exist as well. Jensen is going to gut the Nvidia gaming department even further... that talent is needed on the AI front, hardware and software side. They will keep a toe in with Super cards... and maybe a Blackwell-refresh mid-range tier of cards. The thing is, IF (big if) AMD and/or Intel were to manage a gaming product that pushes Nvidia, Jensen will just say nope, no more. Or maybe worse... he releases something like a 5090 Ti using the lowest end of the current AI silicon and puts a price tag on it so painful even the most die-hard Nvidia fans will tap out.
The other side of this, though, is that games as a whole are doing less and less with raster natively, and advancements brought forward in DX12U and Vulkan have improved raster performance by some 20% when those features are used. Mesh shaders and their associated culling methods cut a lot of unnecessary processing of geometry that never gets seen or even displayed, and advancements in texture codecs have brought much better scaling, which reduces the need for a lot of background processing; in some cases that work can be offloaded to dedicated silicon and is no longer part of the raster process.
We are just getting to the stage where these features are commonplace, as it has taken a solid two years longer than planned for those technologies to get built into the engines powering our games. We are going to see more titles that skip bothering to clean up shadows, reflections, and such for non-ray-traced methods, and many of the developer toolsets are now using some pretty fancy AI to optimize assets to a degree that just isn't economical for developers to do by hand. For years developers have had the option to do a lot of background optimization work that they simply skipped, because they could tell us to get a faster GPU instead of spending $2 million on the manpower to do that work. Now they press a button, a $50K workstation does the work over a weekend, and the art department gets to work with that output first thing Monday. Game development and its tools have had some huge changes in the works that COVID kicked in the teeth timeline-wise, and we are getting to see those soon. Raster is still super important, but its share of the workload is gradually shrinking as new methods available to developers get deployed that reduce the load on those pipes, while on the ray tracing side of things the demand is increasing more than not.
 
I generally look forward to what is to come in 2026 and beyond, because by then the packaging should be available that will make a lot of the MCM and chiplet designs we have now actually work as intended.
That is the potential hope for gamers, IMO. Yes, AMD's first chiplet attempt here had some issues, but that is probably the solution going forward. The AI stuff still needs a small number of standard raster-style compute cores; a solution where gaming cards could simply stack a couple of them is really the best hope of not having hardware stagnate or jump in price so high it makes no sense.
 
The 4090 is an anomaly. I would say unless something changes, Nvidia isn't going to be dedicating that size of die to a gaming card again. The upcoming 4090 Super will be the last large gaming chip Nvidia will be willing to part with.

Hey, I could be wrong...
You are probably not wrong, as apparently TSMC 3 nm limits die size quite a bit. We can expect a Grace Hopper-type setup, a la M1 Ultra, of pairing up multiple GPUs for the AI stuff, and the 5090 to be smaller than the 4090 if it is monolithic.
 
You are probably not wrong, as apparently TSMC 3 nm limits die size quite a bit. We can expect a Grace Hopper-type setup, a la M1 Ultra, of pairing up multiple GPUs for the AI stuff, and the 5090 to be smaller than the 4090 if it is monolithic.
Not quite. TSMC has a maximum reticle limit of 858 mm², but they have the ability to tile that out 2x3 for a 6x increase, resulting in a max size of 5148 mm².
858 mm² has been the max individual size TSMC has been capable of for a while, but they have gotten better at segmenting and joining dies together seamlessly to go much bigger. That's how Apple does theirs, with the out-facing connectors so TSMC can mirror them in pairs during the lithography process.
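The raw reticle arithmetic being described, for reference; how big a stitched die can actually get in practice depends on the packaging flow, so this is just the multiplication:

```python
# Standard 193i / EUV reticle field is 26 mm x 33 mm.
field_w_mm, field_h_mm = 26, 33
single_field_mm2 = field_w_mm * field_h_mm     # 858 mm2
stitched_2x3_mm2 = single_field_mm2 * 2 * 3    # tile the full field 2x3
print(single_field_mm2, stitched_2x3_mm2)      # 858 5148
```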

I don't expect a 5090; the China bans aren't going away, and the AIBs aren't going to pack up their facilities and leave either. Sadly there aren't any facilities outside China that could handle that consumer load, so it's going to force Nvidia and AMD to change up their offerings.
So that means the largest we are reasonably looking at is something around the AD103.

What I think you were thinking of with the decrease on N3 wasn't the tile size but the number of masking layers. Originally N3 was proposed to have 25 layers, but they have had to reduce it to 20, which brought us the numerous 3nm variants of N3, N3P, N3E, N3S, and N3X, where they vary the layer count, SRAM cell type, and gate and metal pitches around for yield and application optimizations.
 
What I think you were thinking of with the decrease on N3 wasn't the tile size but the number of masking layers.
I have very little idea of what those are or their implications, so I doubt it. I think this is what I had in mind:

https://hardforum.com/threads/rtx-5xxx-rx-8xxx-speculation.2029825/post-1045720389
"Current i193 and EUV lithography steppers have a maximum field size of 26 mm by 33 mm or 858 mm². In future High-NA EUV lithography steppers the reticle limit will be halved to 26 mm by 16,5 mm or 429 mm² due to the use of an amorphous lens array."


Probably mixing up N3 with N2, but I swear I did hear talk about Blackwell N3 monolithic die size being an issue. A lot of it was from Moore's Law Is Dead, I think, who was talking about a really big 5090 plan in one segment or video.
 
I have very little idea of what those are or their implications, so I doubt it. I think this is what I had in mind:

https://hardforum.com/threads/rtx-5xxx-rx-8xxx-speculation.2029825/post-1045720389
"Current i193 and EUV lithography steppers have a maximum field size of 26 mm by 33 mm or 858 mm². In future High-NA EUV lithography steppers the reticle limit will be halved to 26 mm by 16,5 mm or 429 mm² due to the use of an amorphous lens array."


Probably mixing up N3 with N2, but I swear I did hear talk about Blackwell N3 monolithic die size being an issue. A lot of it was from Moore's Law Is Dead, I think, who was talking about a really big 5090 plan in one segment or video.
OK, yeah, for N2 that is different, but the N2 process is being done on the EXE 5200 series and not the NXE 3600, and it's a whole other animal. Pair that with a completely new transistor design bringing in GAA and new layer-masking technologies, and something that took the full 858 mm² on N3 will, with a few redesigns, fit well within that 429 mm² with better clocks and lower power to show for it.
 
Wild, talk about massive returns for people who bought Nvidia stock. It has always been a sound investment.
 
I don't know if you got the message from the CEO. Nvidia is an AI company now. How does raster benefit them?
I'm not being facetious, really... the truth is raster is no longer Nvidia's mission statement. I mean, have you not been paying attention to Nvidia for the past few years? The 4060 isn't faster than a 3060... it's slower. It's true across the board, not counting the top of the stack. Nvidia hasn't wanted to dedicate more of its fab allotment than it felt it needed to supply gamers. The 4090 is an anomaly. I would say unless something changes, Nvidia isn't going to be dedicating that size of die to a gaming card again. The upcoming 4090 Super will be the last large gaming chip Nvidia will be willing to part with.

Hey, I could be wrong... I would love to be wrong and see some uber Nvidia card in 2025. I just really don't see the AI boom slowing down. As I said responding to Duke, AI isn't the crypto boom. Actual things are being produced, and I really don't see how in the next 3-4 years Nvidia's position changes at all. They are going to be pressed on fab space at every turn, they are going to struggle to keep up with demand... and the internal pressures we can't see are going to exist as well. Jensen is going to gut the Nvidia gaming department even further... that talent is needed on the AI front, hardware and software side. They will keep a toe in with Super cards... and maybe a Blackwell-refresh mid-range tier of cards. The thing is, IF (big if) AMD and/or Intel were to manage a gaming product that pushes Nvidia, Jensen will just say nope, no more. Or maybe worse... he releases something like a 5090 Ti using the lowest end of the current AI silicon and puts a price tag on it so painful even the most die-hard Nvidia fans will tap out.
Jensen: NVIDIA is now an AI company.

Internet nerds: NVIDIA says they don't care about graphics cards anymore!
 
Hello? Have you guys seen the cost of video cards? How about cloud computing anyone?
* "its data center division... up 279% over the same quarter last year."
* "its gaming division... increasing 81% year over year."

No surprises here.
 
The 4060 isn't faster than a 3060... it's slower. It's true across the board, not counting the top of the stack. Nvidia hasn't wanted to dedicate more of its fab allotment than it felt it needed to supply gamers. The 4090 is an anomaly.
That seems a bit hyperbolic. Raster only, a 4080 is quite a bit faster than a 3080, and that holds (just by less and less) down to the 4060; the 4060 also had an MSRP significantly lower than the 3060 did back in the day:

[Chart: TechPowerUp relative performance at 2560x1440]


And it is not because the company got bad at raster; it is much more, like you are saying, a mix of pricing themselves out of large volume (especially true above the 4060) and making really small, very profitable cards.

Look at how much (33%) 380 mm² of Lovelace on a 256-bit memory bus can beat the 628 mm² of the Ampere 3090 on a 384-bit bus at raster-only tasks.
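Same point in perf-per-area terms, using the roughly quoted die sizes and the ~33% figure above, so ballpark only:

```python
# 4080-class AD103 (~380 mm2, 256-bit) vs 3090 GA102 (~628 mm2, 384-bit), raster only.
ad103_mm2, ga102_mm2 = 380, 628
perf_ratio = 1.33  # ~33% faster, per the figure above
perf_per_area_gain = perf_ratio / (ad103_mm2 / ga102_mm2)
print(f"~{perf_per_area_gain:.1f}x the raster performance per mm2")  # ~2.2x
```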

If the 4090 were not the fastest raster card on the market, I doubt its success would be anywhere close to what it is. Transformer/AI models can mimic almost anything very well, and even if games continue to be raster-made, more and more of it will be faked; our current DLSS is only the start. What water, snow, leaves, or grass in the wind should look like on our screens will largely be ML-driven soon, but I would still expect the 5000 series to be faster at raster than the 4000.
 
That is some spin. The truth is Nvidia sells no more silicon than it has to into the low-profit gaming market. Yes, the real cards aren't for us anymore.

As for the 4090, I agree it is the fastest raster card; that is obvious. Its sales right now are also probably going 50/50 to gamers and to people hoovering them up to sidestep import restrictions. There is a reason the 4090's pricing has been going up.

Releasing a new software feature that artificially only works on the latest card, which isn't faster than the previous version, to gain a "win"... isn't a win, IMO. It should be pretty telling. Nvidia doesn't think much of our intelligence.
 
That is some spin. The truth is Nvidia sells no more silicon than it has to into the low-profit gaming market. Yes, the real cards aren't for us anymore.

As for the 4090, I agree it is the fastest raster card; that is obvious. Its sales right now are also probably going 50/50 to gamers and to people hoovering them up to sidestep import restrictions. There is a reason the 4090's pricing has been going up.

Releasing a new software feature that artificially only works on the latest card, which isn't faster than the previous version, to gain a "win"... isn't a win, IMO. It should be pretty telling. Nvidia doesn't think much of our intelligence.
This is why I miss real competition, because if Nvidia were intentionally selling sub-par GPUs, you would think AMD would use it as an opportunity to walk all over them and gobble up market share. It's not like gaming GPUs are sold at a loss; there is money to be made, and it's obvious we're willing to spend.
But they can't, because they are playing the same song; they're just over on stage 2, with Intel playing their opening act.
 
This is why I miss real competition, because if Nvidia were intentionally selling sub-par GPUs, you would think AMD would use it as an opportunity to walk all over them and gobble up market share. It's not like gaming GPUs are sold at a loss; there is money to be made, and it's obvious we're willing to spend.
But they can't, because they are playing the same song; they're just over on stage 2, with Intel playing their opening act.
No doubt real competition would either force Nvidia to make proper gaming cards... or officially exit the market. AMD has had decent mid-range offerings... but to really compete they need to outright win a flagship fight at some point. Nothing less will change minds. They can release all the 5800/7900-type hardware they want... and you can argue they are the best cards in their price class. If they can't dethrone Nvidia at the top end, though, Nvidia can continue selling cut-down cards that rely on generation-locked DLSS features.
 
Question: maybe not the correct place for this one, but hear me out.
At what point does it make sense for the industry to push for GPUs to be socketed like CPUs are?
Because GPUs as they exist now are pretty much a computer unto themselves: processor, motherboard, RAM, and a BIOS. And now with DirectStorage technologies they have storage too, and NVLink or xGMI is a networking component.

Or do I have that backward, and the rest of the PC industry needs to start making the CPU, motherboard, etc., systems we build now more GPU-like?

When does it start making financial sense for the CPU components and the GPU components to all share some OAM-style connector format, so the industry as a whole can share components and interfaces, driving down costs as things get standardized and simplified across the industry?
 
Question: maybe not the correct place for this one, but hear me out.
At what point does it make sense for the industry to push for GPUs to be socketed like CPUs are?
Because GPUs as they exist now are pretty much a computer unto themselves: processor, motherboard, RAM, and a BIOS. And now with DirectStorage technologies they have storage too, and NVLink or xGMI is a networking component.

Or do I have that backward, and the rest of the PC industry needs to start making the CPU, motherboard, etc., systems we build now more GPU-like?

When does it start making financial sense for the CPU components and the GPU components to all share some OAM-style connector format, so the industry as a whole can share components and interfaces, driving down costs as things get standardized and simplified across the industry?
You're the last person I figured I would have to give this response to, as it has come up from time to time for at least the last two decades, usually in regard to things like not wanting to have to re-buy VRAM over again, etc.

And basically it's two things:
1.) It is "socketable" in the sense that the PCI-E slot is the "socket".
2.) The cards that GPUs reside on are currently too complex to reduce to a single type of socket. Part of the reason both AMD and nVidia sell kit parts is that things like signaling are so complex that they require very specific routing (they also want money, yes). And that's on a per-card basis. Secondly, GPUs have totally different pin-outs on every GPU. There is no standardization there, especially not generation to generation, and that's because they are inherently incredibly complex. Standardization at the board level would just lead to stagnation, because it would create an artificial limitation that doesn't need to be there. You'd more or less have to "update the socket" every time there was a new GPU generation, defeating the purpose of the socket. And even then it would be hard to have multiple GPUs from the same generation use said socket.
I bring up this anecdote all the time too: I knew someone who did the technical documents for nVidia back in the early 00's (he even gave a little tour of nVidia corporate at the time). The long and short of it is that the pin-outs etc. require a 100+ page document to give to third parties just so they know how the GPU works and what all the pins do. And that is on a per-card basis.

Right now the physical cards do not resemble each other at all and cost totally different amounts. It's not hard to see that a 4090 board is significantly more complex than a 4060 board. Does it even make sense to overbuild a motherboard to support a 4090-level card if it will only ever get a 4050 in there? That's adding a lot of additional layers, signaling hardware, etc., that aren't necessary. Motherboards already cost a ton. Do we want entry-level boards to be $500+?

The only way I see GPUs becoming socketable like this is if a vendor controls the entire product stack and for some reason has the CPU/GPU separate. An example of this would be nVidia's Grace Hopper, which contains a CPU and GPU on a single board. I just don't think it makes sense even then for nVidia to separate the CPU and GPU components, and I'm not sure they even could, again from an interfacing standpoint.

For those reasons, I think we're either going to have more or less what we have now, or move to some sort of SoC. There isn't really any space in the middle. Apple is clearly on the SoC side. nVidia is trying to build essentially an SoC, or perhaps an SoB (system on a board). AMD is building SoCs for consumers, and theoretically will have a competitor to Grace Hopper at some point.
 
You're the last person I figured I would have to give this response to, as it has come up from time to time for at least the last two decades, usually in regard to things like not wanting to have to re-buy VRAM over again, etc.

And basically it's two things:
1.) It is "socketable" in the sense that the PCI-E slot is the "socket".
2.) The cards that GPUs reside on are currently too complex to reduce to a single type of socket. Part of the reason both AMD and nVidia sell kit parts is that things like signaling are so complex that they require very specific routing (they also want money, yes). And that's on a per-card basis. Secondly, GPUs have totally different pin-outs on every GPU. There is no standardization there, especially not generation to generation, and that's because they are inherently incredibly complex. Standardization at the board level would just lead to stagnation, because it would create an artificial limitation that doesn't need to be there. You'd more or less have to "update the socket" every time there was a new GPU generation, defeating the purpose of the socket. And even then it would be hard to have multiple GPUs from the same generation use said socket.
I bring up this anecdote all the time too: I knew someone who did the technical documents for nVidia back in the early 00's (he even gave a little tour of nVidia corporate at the time). The long and short of it is that the pin-outs etc. require a 100+ page document to give to third parties just so they know how the GPU works and what all the pins do. And that is on a per-card basis.

Right now the physical cards do not resemble each other at all and cost totally different amounts. It's not hard to see that a 4090 board is significantly more complex than a 4060 board. Does it even make sense to overbuild a motherboard to support a 4090-level card if it will only ever get a 4050 in there? That's adding a lot of additional layers, signaling hardware, etc., that aren't necessary. Motherboards already cost a ton. Do we want entry-level boards to be $500+?

The only way I see GPUs becoming socketable like this is if a vendor controls the entire product stack and for some reason has the CPU/GPU separate. An example of this would be nVidia's Grace Hopper, which contains a CPU and GPU on a single board. I just don't think it makes sense even then for nVidia to separate the CPU and GPU components, and I'm not sure they even could, again from an interfacing standpoint.

For those reasons, I think we're either going to have more or less what we have now, or move to some sort of SoC. There isn't really any space in the middle. Apple is clearly on the SoC side. nVidia is trying to build essentially an SoC, or perhaps an SoB (system on a board). AMD is building SoCs for consumers, and theoretically will have a competitor to Grace Hopper at some point.
I ask because I got dumped/volunteered into a stupidly frustrating problem not at all of my making, and the only solution I can find makes everybody unhappy. They really have no idea what they signed up for, but they are too far in to back out and too underfunded to proceed, and it's their hardware "requirements" that are killing them. So now I am trying to talk them down, figure out what they are actually trying to accomplish, and see if any of the hardware they purchased incorrectly can be returned/resold.
This is what happens when a council appoints their nephew who is "really good with computers" as the IT manager for a large project... Bro is a line cook who was previously a nurse who refused to get their shots.
BHA!
 
I ask because I got dumped/volunteered into a stupidly frustrating problem not at all of my making, and the only solution I can find makes everybody unhappy. They really have no idea what they signed up for, but they are too far in to back out and too underfunded to proceed, and it's their hardware "requirements" that are killing them. So now I am trying to talk them down, figure out what they are actually trying to accomplish, and see if any of the hardware they purchased incorrectly can be returned/resold.
This is what happens when a council appoints their nephew who is "really good with computers" as the IT manager for a large project... Bro is a line cook who was previously a nurse who refused to get their shots.
BHA!
Haha. Well, that's a totally different, far-afield thing there.
At least you're a good enough employee to try to come in and fix the problem rather than just saying to torch the whole thing to the ground and never start over.

But if people think some guy can do the systems engineering to build out a data center like what any of the top companies have, just because they "know a guy" who is "good with computers", then they have what is coming to them. It's a supreme amount of ignorance in terms of education. Not that I know what your requirements are or that it's that level of disparity, just to say systems engineering goes waaay past slapping a bunch of parts together into a PC (and I know you know that). Just laughing alongside you in my own way.
 
Haha. Well, that's a totally different, far-afield thing there.
At least you're a good enough employee to try to come in and fix the problem rather than just saying to torch the whole thing to the ground and never start over.

But if people think some guy can do the systems engineering to build out a data center like what any of the top companies have, just because they "know a guy" who is "good with computers", then they have what is coming to them. It's a supreme amount of ignorance in terms of education. Not that I know what your requirements are or that it's that level of disparity, just to say systems engineering goes waaay past slapping a bunch of parts together into a PC (and I know you know that). Just laughing alongside you in my own way.
Not even my job. The village council got funding to do some tech-startup initiatives to get better internet infrastructure in the area to attract business and blah blah blah, and they need to make some reports back to the province on how they are progressing, and the answer is they haven't made a lot of progress at all.
So they called me up, asked me to meet for lunch, gave me a breakdown, and asked how fucked they were, and the answer is probably not as bad as I am fearing, but the kid is way in over his head. When I was looking over all the photos of all the gear they have sprawled out over numerous tables, benches, and racks, I got to thinking: Jesus, could Intel, IBM, AMD, and Nvidia not have gotten together and figured out how to make this not suck? And here we are.
They went in on some way over-the-top network security stuff which is AI-driven and frankly overkill for their goals, plus hardware for some Proxmox-based hosting services, and it just kind of goes downhill from there. They wanted to run the network security suite in the Proxmox server infrastructure, which means each member of the stack needs to have the AI capacity to do it, but at any given point that means two-thirds of the hardware sits idle, and the hardware itself is overkill by a lot.
 