Nvidia begins developing Arm-based PC chips in challenge to Intel

But Nvidia and ARM together is hardly new, if you count the server space.
 
The Apple haters will love this news, while ripping Apple's ARM chips in the same breath.
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly for it.

Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

If Apple can get some advances in their GPU, that would be a great help to the platform.
 
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly for it.

Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

If Apple can get some advances in their GPU, that would be a great help to the platform.
Apple has a huge advantage that generic platforms such as x86 or ARM do not have.

They are a vertical company (another example is Nvidia & CUDA).

No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.

Apple will keep on 'innovating' and create new hardware and new APIs that take advantage of the new hardware. They don't care for backward compatibility. The devs have absolutely no say in this. (It is just like Nvidia forcing you to buy a new GPU every time a new DLSS version is released.)
 
Apple has a huge advantage that generic platforms such as x86 or ARM do not have.

They are a vertical company (another example is Nvidia & CUDA).

No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.

Apple will keep on 'innovating' and create new hardware and new APIs that take advantage of the new hardware. They don't care for backward compatibility. The devs have absolutely no say in this. (It is just like Nvidia forcing you to buy a new GPU every time a new DLSS version is released.)
Frame Generation is a new feature of a new DLSS version, but existing features still work on older cards with newer DLSS versions. DLSS Frame Generation != DLSS 3.
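To make that distinction concrete, here's a minimal C++ sketch of the idea (the types and generation checks are illustrative stand-ins, not NVIDIA SDK symbols): the DLSS library version you ship and the feature set a given GPU can run are separate axes.

```cpp
// Toy illustration: a newer DLSS library still runs its older features on
// older RTX cards; only specific new features (like Frame Generation, which
// leans on Ada's optical-flow hardware) gate on the GPU generation.
// Hypothetical types and checks, not NVIDIA's actual SDK.
#include <cstdio>

struct Gpu { int rtxGeneration; };  // 20 = Turing, 30 = Ampere, 40 = Ada

bool supportsSuperResolution(const Gpu& g) { return g.rtxGeneration >= 20; }
bool supportsFrameGeneration(const Gpu& g) { return g.rtxGeneration >= 40; }

int main() {
    Gpu ampere{30};  // e.g. an RTX 3080 running a DLSS 3.x library
    std::printf("Super Resolution: %s, Frame Generation: %s\n",
                supportsSuperResolution(ampere) ? "yes" : "no",
                supportsFrameGeneration(ampere) ? "yes" : "no");
    // prints: Super Resolution: yes, Frame Generation: no
}
```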
 

Nvidia begins developing Arm-based PC chips in challenge to Intel

NVIDIA Corporation (NASDAQ:NVDA) has quietly started designing central processing units, in a move that fires a shot across the bow at Intel Corporation (NASDAQ:INTC), according to reporting from Reuters.
I could see an ARM PC if I were only browsing, doing email, and word processing, but I have way too many games to want to switch to an ARM CPU. I also hardly ever use a laptop, so the energy efficiency of ARM has no relevance to me.
 
I could see an ARM PC if I were only browsing, doing email, and word processing, but I have way too many games to want to switch to an ARM CPU. I also hardly ever use a laptop, so the energy efficiency of ARM has no relevance to me.
The x86-to-ARM translation layers are coming along nicely. x86 is very well documented and ARM is very flexible.
Microsoft's own efforts here have something like an 80% instruction mapping with a 2% overhead, so it's not such a far-fetched idea that this could happen in the next couple of years in a near-seamless manner.
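For a feel of what that "mapping" means, here's a toy C++ sketch of the core idea behind an x86-to-ARM64 binary translator: decode an x86 operation, emit the equivalent ARM64 instruction word. The decoded-instruction struct is made up, and real translators (Rosetta 2, Microsoft's emulation layer) also handle flags, memory-ordering differences, and JIT caching.

```cpp
// Toy sketch of x86 -> ARM64 instruction mapping. Real translators handle
// thousands of instruction forms; unmapped ones fall back to slower paths.
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical pre-decoded x86 instruction (not a real x86 encoding).
struct X86Insn { enum Op { ADD, MOV } op; int dst, src; };

// Emit one 32-bit ARM64 instruction word per x86 op we understand.
uint32_t translate(const X86Insn& i) {
    switch (i.op) {
        case X86Insn::ADD:  // ADD Xd, Xd, Xm (ARM64 add, shifted register)
            return 0x8B000000u | (i.src << 16) | (i.dst << 5) | i.dst;
        case X86Insn::MOV:  // ORR Xd, XZR, Xm (ARM64 register-move idiom)
            return 0xAA0003E0u | (i.src << 16) | i.dst;
    }
    return 0xD503201Fu;  // NOP stand-in for the unmapped remainder
}

int main() {
    // mov x0, x1 ; add x0, x0, x2 -- a tiny translated block
    std::vector<X86Insn> block = {{X86Insn::MOV, 0, 1}, {X86Insn::ADD, 0, 2}};
    for (const auto& insn : block)
        std::printf("arm64 word: 0x%08X\n", translate(insn));
}
```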
 
But Nvidia and ARM together is hardly new, if you count the server space.
Or the manufacturing or robotics industries.
Nvidia has a lot of ARM silicon out there, from self-driving floor burnishers to automated farming equipment, and many of the robots used for manufacturing and assembly.
 
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly for it.
Turns out Apple can't just steal... I mean license Imagination's PowerVR GPU design and ride that for very long. It takes a lot of money and time to make a proper modern GPU.
Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.
That's what happened when Apple first introduced their M1 chips: they used 5 nm while AMD was on 7 nm and Intel was on maybe 14 nm? Turns out chip manufacturing matters a lot.

They are a vertical company (another example is Nvidia & CUDA).

No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.

Apple will keep on 'innovating' and create new hardware and new APIs that take advantage of the new hardware. They don't care for backward compatibility. The devs have absolutely no say in this. (It is just like Nvidia forcing you to buy a new GPU every time a new DLSS version is released.)
Let me know when that vertical thing happens for Apple. Most developers still use MoltenVK for Apple's Metal API. Also, CUDA for Nvidia is great so long as you buy Nvidia. The moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.
 
Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

As someone who, with almost every system, almost always finds myself trying to do something unsupported that the developers never intended, and who gets frustrated by such limitations, this is why any Apple product is a "never buy" proposition for me.

I'd be tearing my hair out in a week flat when I tried to do something that wasn't following Apple's yellow brick road.

I need technology that is customizable to suit my needs and desires. I will NEVER alter my needs or desires to suit the technology, and Apple's "one size fits all" approach can simply fuck right off.
 
Let me know when that vertical thing happens for Apple. Most developers still use MoltenVK for Apple's Metal API. Also, CUDA for Nvidia is great so long as you buy Nvidia. The moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.
MoltenVK adds at worst a 2% overhead.
And most current development tools have Metal built in, because those multi-platform developer tools don't have you writing the same call in 3 different graphics APIs; you do it once and it's all translated in the toolset's back end.
But similarly, almost nobody programs in Vulkan directly; most use Logi, and most don't do DX12 either; for that they use Link.

Very few developers are working in the native low-level APIs; they are using wrappers upon wrappers upon wrappers.
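As a rough illustration of that wrapper pattern (with made-up names, not any real toolset's API), application code targets one abstract device interface and a backend is picked once at startup; the wrapper is what actually speaks Vulkan, Metal (possibly via MoltenVK), or DX12:

```cpp
// Minimal sketch of a graphics-API wrapper: one abstract interface, multiple
// backends. The real versions (bgfx, nvrhi, engine RHIs) wrap actual
// Vulkan/Metal/DX12 calls where these stubs just print.
#include <iostream>
#include <memory>
#include <string>

struct GpuDevice {                 // the abstraction application code uses
    virtual ~GpuDevice() = default;
    virtual void drawTriangle() = 0;
};

struct VulkanDevice : GpuDevice {  // would wrap vkCmdDraw and friends
    void drawTriangle() override { std::cout << "Vulkan draw\n"; }
};

struct MetalDevice : GpuDevice {   // would wrap a Metal render encoder
    void drawTriangle() override { std::cout << "Metal draw\n"; }
};

std::unique_ptr<GpuDevice> makeDevice(const std::string& backend) {
    if (backend == "metal") return std::make_unique<MetalDevice>();
    return std::make_unique<VulkanDevice>();  // default backend
}

int main() {
    auto dev = makeDevice("metal");  // chosen per platform, once
    dev->drawTriangle();             // the one call site the dev writes
}
```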
 
The moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.

 
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly for it.

Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

If Apple can get some advances in their GPU, that would be a great help to the platform.
You don't buy a MacBook to play video games on. For everything other than video games, they are completely superior.
 
You don't buy a MacBook to play video games on. For everything other than video games, they are completely superior.
Well, for Blender, Adobe, numerous video tools, art tools, and design tools, the GPU there does well enough, but it's not what I would call great.
Apple is making a pitch at Disney for the virtual stages, and Epic got that working in Unreal Engine so they can, but performance there is pretty bad, because currently that job is done by a Threadripper and an Nvidia RTX 6000; Apple's GPU is beefy for its size, but it's not that beefy.
 
MoltenVK adds at worst a 2% overhead.
And most current development tools have Metal built in, because those multi-platform developer tools don't have you writing the same call in 3 different graphics APIs; you do it once and it's all translated in the toolset's back end.
But similarly, almost nobody programs in Vulkan directly; most use Logi, and most don't do DX12 either; for that they use Link.

Very few developers are working in the native low-level APIs; they are using wrappers upon wrappers upon wrappers.

What the hell are Logi and Link?

You'll find Vulkan and DX12 renderers everywhere. Go pick your favorite emulator and there's probably a Vulkan renderer at this point.
 
What the hell are Logi and Link?

You'll find Vulkan and DX12 renderers everywhere. Go pick your favorite emulator and there's probably a Vulkan renderer at this point.
They were the popular wrapper APIs for Vulkan and DX12, though it seems they have been supplanted since, so I may have some updates to consider for a few of the labs.
 
Logi and Link are the wrappers used by a lot of development studios, so instead of writing Vulkan and DX12 directly you're writing to those APIs.

Looks like Logi has been supplanted by VKFS.
https://github.com/MHDtA-dev/VKFS

I cannot find even a single mention of either - this thing has like 4 stars on GH and does practically nothing.

bgfx and nvrhi are some well-known libraries, and even they barely have traction.
https://github.com/NVIDIAGameWorks/nvrhi
https://github.com/bkaradzic/bgfx

WebGPU is interesting, and there are libraries that provide implementations with Vulkan, DX12, and Metal backends...
https://github.com/gfx-rs/wgpu-native

... but almost everyone basically just implements their own shit.
 
The x86-to-ARM translation layers are coming along nicely. x86 is very well documented and ARM is very flexible.
Microsoft's own efforts here have something like an 80% instruction mapping with a 2% overhead, so it's not such a far-fetched idea that this could happen in the next couple of years in a near-seamless manner.
Not interested until they show 100% compatibility with x86, AMD64, and Intel 64 with minimal performance loss in actual testing.
 
Every MediaTek or Exynos or UNISOC chip I've ever used is crap compared to its Qualcomm counterpart in smoothness and performance (Exynos would be 2nd best, and the others are tied for last place).
 
Every MediaTek or Exynos or UNISOC chip I've ever used is crap compared to its Qualcomm counterpart in smoothness and performance (Exynos would be 2nd best, and the others are tied for last place).
Still, maybe Nvidia can whip the Dimensity series into shape. Nvidia knows a thing or 3 about making ARM CPUs and GPUs; MediaTek brings the manufacturing and distribution. MediaTek's existing stuff may not be top of the line, but they sell more of it than anybody else.
 
If Nvidia manages to pull this off, then the nascent handheld gaming market, dominated by AMD, could see stiff competition.

The incentive for Nvidia is to get technologies such as DLSS 3.5 into the handheld gaming market.
 
If Nvidia manages to pull this off, then the nascent handheld gaming market, dominated by AMD, could see stiff competition.

The incentive for Nvidia is to get technologies such as DLSS 3.5 into the handheld gaming market.
I'm all for Nvidia bringing back their Tegra products, but they failed for a reason. They had games ported to their products, but most of them were from over a decade and a half ago. Nvidia stepped out because Apple, Qualcomm, and Samsung were extremely competitive. Nvidia's ARM chips were slower and consumed more power. DLSS won't do anything since they can all use FSR. Also, if Valve had wanted to use an ARM SoC for the Steam Deck, then they would have done so. There's a reason why Valve used AMD and not Qualcomm.
 
I'm all for Nvidia bringing back their Tegra products, but they failed for a reason. They had games ported to their products, but most of them were from over a decade and a half ago. Nvidia stepped out because Apple, Qualcomm, and Samsung were extremely competitive. Nvidia's ARM chips were slower and consumed more power. DLSS won't do anything since they can all use FSR. Also, if Valve had wanted to use an ARM SoC for the Steam Deck, then they would have done so. There's a reason why Valve used AMD and not Qualcomm.
In what way has Tegra failed?
Nvidia sells a crapload of them; they absolutely dominate the market space.
 
In what way has Tegra failed?
Nvidia sells a crapload of them; they absolutely dominate the market space.
When was the last time you saw a tablet or smartphone with an Nvidia SoC in it? Other than the Nintendo Switch and cars, it's mostly dead. Nvidia had to find niches for their SoCs because the ARM market is fierce and saturated. This is why AMD still hasn't made their own ARM-based chips: who would buy them? Apple makes their own chips. Samsung makes their own chips and sometimes buys from Qualcomm. Qualcomm has patents that are still causing issues.
 
When was the last time you saw a tablet or smartphone with an Nvidia SoC in it? Other than the Nintendo Switch and cars, it's mostly dead. Nvidia had to find niches for their SoCs because the ARM market is fierce and saturated. This is why AMD still hasn't made their own ARM-based chips: who would buy them? Apple makes their own chips. Samsung makes their own chips and sometimes buys from Qualcomm. Qualcomm has patents that are still causing issues.
I have 6 automatic floor burnishers that are powered by Tegra.
Tegra has more GPU than most could possibly use in a phone or a tablet, and the Android ecosystem is fractured enough without throwing yet another GPU architecture into the ring. Mali is already broken enough without AMD and Nvidia throwing their hats into the pile; I mean, just look at the state of the AMD-Samsung partnership for Exynos, god damned mess that is.

Put a modern Jetson in an Android tablet or a Chromebook and watch the GPU never go past 15% utilization over a 5-to-7-year life cycle; there's nothing there to make use of it. So that needs a new product. But why bother competing against Qualcomm or Broadcom at that scale, and why spend the money to develop such a small cut-down GPU architecture only to get into a bidding war to the bottom for market share against 3 entrenched players who are huge and likely have continuing supply contracts?

Nvidia's answer to that is their partnership with MediaTek, which gets them into the market with somebody who already has supply contracts, experience working with the AIBs and OEMs, and known functional SoCs. Average-at-best SoCs, but that is something that can be fixed; getting the OEMs to change vendors is what's hard. No matter what Nvidia throws into the ring, Samsung and Google won't switch from their own SoCs in their top-end phones, so that leaves Nvidia playing second fiddle in the mid-range offerings, and that isn't something they want to even bother with.

That's one of the reasons Nvidia fell out of the market back in 2016: Qualcomm and Broadcom were able to drastically undercut them with much weaker GPU offerings, because nothing on the platform was taking advantage of the GPUs Nvidia was putting out there in the Tegra K1 packages. Intel also went to the mat on pricing with their Celeron N series stuff, because that whole "you can't offer bundles" thing that AMD took them to court over was overturned; turns out they totally can, and back to selling chips for peanuts in bulk they went.
 