Nvidia begins developing Arm-based PC chips in challenge to Intel

But Nvidia doing ARM is hardly new, if you count the server space.
 
The Apple haters will love this news, while ripping Apple's ARM chips in the same breath.
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly to it.

Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

If Apple can get some advances in their GPU, that would be a great help to the platform.
 
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly to it.

Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

If Apple can get some advances in their GPU, that would be a great help to the platform.
Apple has a huge advantage that generic platforms such as x86 or ARM do not have.

They are a vertically integrated company (another example is Nvidia & CUDA).

No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.

Apple will keep on 'innovating' and create new hardware and new APIs that take advantage of that hardware. They don't care for backward compatibility. The devs have absolutely no say in this. (It is just like how Nvidia forces you to buy a new GPU every time a new DLSS version is released.)
 
Apple has a huge advantage that generic platforms such as x86 or ARM do not have.

They are a vertically integrated company (another example is Nvidia & CUDA).

No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.

Apple will keep on 'innovating' and create new hardware and new APIs that take advantage of that hardware. They don't care for backward compatibility. The devs have absolutely no say in this. (It is just like how Nvidia forces you to buy a new GPU every time a new DLSS version is released.)
The new feature of a new DLSS version may need new hardware, but existing features keep working on older cards in newer DLSS versions; DLSS Frame Generation != DLSS 3.
 

Nvidia begins developing Arm-based PC chips in challenge to Intel

NVIDIA Corporation (NASDAQ:NVDA) has quietly started designing central processing units, in a move that fires a shot across the bow at Intel Corporation (NASDAQ:INTC), according to reporting from Reuters.
I could see an ARM PC if I were only browsing, doing email, and word processing, but I have way too many games to want to switch to an ARM CPU. I also hardly ever use a laptop, so the energy efficiency of ARM has no relevance to me.
 
I could see an ARM PC if I were only browsing, doing email, and word processing, but I have way too many games to want to switch to an ARM CPU. I also hardly ever use a laptop, so the energy efficiency of ARM has no relevance to me.
The x86-on-ARM translation layers are coming along nicely. x86 is very well documented and ARM is very flexible.
Microsoft's own efforts here have something like an 80% instruction mapping with about a 2% overhead, so it's not such a far-fetched idea that this could happen in the next couple of years in a near-seamless manner.
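Purely to illustrate the mapping idea (a toy sketch, nothing like how Microsoft's emulator or Rosetta are actually built): a translator keeps a table from guest operations to host code, and the handful of guest ops with no direct equivalent fall back to slower helper routines, which is where the few percent of overhead comes from. Every name below is invented for the example.

```cpp
// Toy sketch of the "mapping" idea behind a binary translation layer.
// The mini guest ISA and handler names are invented for illustration;
// real translators compile whole basic blocks to native code and cache them.
#include <array>
#include <cstdint>
#include <cstdio>

enum class GuestOp { Mov, Add, Sub, WeirdFlagOp };   // pretend guest (x86-like) ops
struct GuestInsn { GuestOp op; int dst, src; };

int main() {
    std::array<int64_t, 4> reg = {0, 10, 32, 2};     // pretend guest register file

    // Ops with a direct host equivalent: the cheap, common case.
    auto mov = [&](int d, int s) { reg[d] = reg[s]; };
    auto add = [&](int d, int s) { reg[d] += reg[s]; };
    auto sub = [&](int d, int s) { reg[d] -= reg[s]; };
    // An op with no direct equivalent gets a helper routine: the slow path
    // that accounts for most of the translation overhead.
    auto weird = [&](int d, int s) { reg[d] = (reg[d] ^ reg[s]) & 0xFF; };

    const GuestInsn program[] = {
        {GuestOp::Mov, 0, 1}, {GuestOp::Add, 0, 2},
        {GuestOp::Sub, 0, 3}, {GuestOp::WeirdFlagOp, 0, 2},
    };

    for (const GuestInsn& i : program) {
        switch (i.op) {
            case GuestOp::Mov:         mov(i.dst, i.src);   break;
            case GuestOp::Add:         add(i.dst, i.src);   break;
            case GuestOp::Sub:         sub(i.dst, i.src);   break;
            case GuestOp::WeirdFlagOp: weird(i.dst, i.src); break;
        }
    }
    std::printf("guest r0 = %lld\n", static_cast<long long>(reg[0]));  // prints 8
}
```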
 
But Nvidia doing ARM is hardly new, if you count the server space.
Or the manufacturing or robotics industries.
Nvidia has a lot of ARM silicon out there, from self-driving floor burnishers to automated farming equipment, and many of the robots used for manufacturing and assembly.
 
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly to it.
Turns out Apple can't just steal... I mean, license Imagination's PowerVR GPU design and ride that for very long. It takes a lot of money and time to make a proper modern GPU.
Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.
That's what happened when Apple first introduced their M1 chips: they used 5nm while AMD was on 7nm and Intel was on maybe 14nm. Turns out chip manufacturing matters a lot.

They are a vertically integrated company (another example is Nvidia & CUDA).

No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.

Apple will keep on 'innovating' and create new hardware and new APIs that take advantage of that hardware. They don't care for backward compatibility. The devs have absolutely no say in this. (It is just like how Nvidia forces you to buy a new GPU every time a new DLSS version is released.)
Let me know when that vertical thing happens for Apple. Most developers still use MoltenVK for Apple's Metal API. Also, CUDA for Nvidia is great so long as you buy Nvidia. The moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.
 
Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

As someone who, with almost every system, almost always finds myself trying to do something unsupported that the developers never intended, and gets frustrated by such limitations, this is why any Apple product is a "never buy" proposition for me.

I'd be tearing my hair out in a week flat when I tried to do something that wasn't following Apple's yellow brick road.

I need technology that is customizable to suit my needs and desires. I will NEVER alter my needs or desires to suit the technology, and Apple's "one size fits all" approach can simply fuck right off.
 
Let me know when that vertical thing happens for Apple. Most developers still use MoltenVK for Apple's Metal API. Also, CUDA for Nvidia is great so long as you buy Nvidia. The moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.
MoltenVK adds at worst a 2% overhead.
And most current development tools have Metal built in, because those multi-platform developer tools don't have you writing the same call in three different graphics APIs; you do it once and it's all translated in the toolset's back end.
But similarly, almost nobody programs in Vulkan directly, most use Logi, and most don't do DX12 either; for that they use Link.

Very few developers are working in the native low-level APIs; they are using wrappers upon wrappers upon wrappers.
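To make the "write Vulkan once" point concrete, here's a minimal sketch (error handling and the usual device/swapchain setup omitted): the only macOS-specific wrinkle on recent SDKs is opting in to "portability" implementations so MoltenVK gets enumerated; everything else is the same Vulkan you'd write for Windows or Linux.

```cpp
// Minimal sketch: create a Vulkan instance and list GPUs with code that runs
// unchanged on Windows/Linux drivers or on MoltenVK (Vulkan-on-Metal) on macOS.
// Error handling and the usual device/swapchain setup are omitted.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    // Recent Vulkan SDKs only list "portability" implementations such as
    // MoltenVK if you opt in. On platforms that don't advertise this instance
    // extension you'd query vkEnumerateInstanceExtensionProperties() and skip it.
    const char* exts[] = { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.flags = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;
    ci.pApplicationInfo = &app;
    ci.enabledExtensionCount = 1;
    ci.ppEnabledExtensionNames = exts;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props{};
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("Vulkan device: %s\n", props.deviceName);  // an Apple GPU when running via MoltenVK
    }

    vkDestroyInstance(instance, nullptr);
}
```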
 
The moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.

 
Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough that they're rarely the current bottleneck, but their GPU package is anemic for anything not tailored directly to it.

Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds, good ducking luck.

If Apple can get some advances in their GPU, that would be a great help to the platform.
You don't buy a MacBook to play video games on. For everything other than video games, they are completely superior.
 
You don't buy a MacBook to play video games on. For everything other than video games, they are completely superior.
Well, for Blender, Adobe, and numerous video, art, and design tools, the GPU there does well enough, but it's not what I would call great.
Apple is making a pitch at Disney for the virtual stages, and Epic got that working in Unreal Engine so they can, but performance there is pretty bad, because currently those stages run on a Threadripper and an Nvidia RTX 6000. Apple's GPU is beefy for its size, but it's not that beefy.
 
MoltenVK adds at worst a 2% overhead.
And most current development tools have Metal built in, because those multi-platform developer tools don't have you writing the same call in three different graphics APIs; you do it once and it's all translated in the toolset's back end.
But similarly, almost nobody programs in Vulkan directly, most use Logi, and most don't do DX12 either; for that they use Link.

Very few developers are working in the native low-level APIs; they are using wrappers upon wrappers upon wrappers.

What the hell are Logi and Link?

You'll find Vulkan and DX12 renderers everywhere. Go pick your favorite emulator and there's probably a Vulkan renderer at this point.
 
What the hell are Logi and Link?

You'll find Vulkan and DX12 renderers everywhere. Go pick your favorite emulator and there's probably a Vulkan renderer at this point.
They were the popular wrapper APIs for Vulkan and DX12, though it seems they have been supplanted since, so I may have some updates to consider for a few of the labs.
 
Logi and Link are the wrappers used by a lot of development studios, so instead of writing Vulkan and DX12 directly you're writing to those APIs.

Looks like Logi has been supplanted by VKFS.
https://github.com/MHDtA-dev/VKFS

I cannot find even a single mention of either - this thing has like 4 stars on GH and does practically nothing.

bgfx and nvrhi are some well-known libraries, and even they barely have traction:
https://github.com/NVIDIAGameWorks/nvrhi
https://github.com/bkaradzic/bgfx

WebGPU is interesting, and there are libraries that provide implementations with Vulkan, DX12, and Metal backends...
https://github.com/gfx-rs/wgpu-native

... but almost everyone basically just implements their own shit.
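For what it's worth, here's a rough sketch of what the single-code-path approach looks like with bgfx (linked above). The `nativeWindowHandle` is assumed to come from whatever windowing library you already use (SDL, GLFW, etc.); bgfx then picks Vulkan, Direct3D, or Metal underneath the same calls.

```cpp
// Rough sketch of the single-code-path approach with bgfx (linked above).
// `nativeWindowHandle` is assumed to come from SDL/GLFW/whatever you use;
// the same calls run on Vulkan, Direct3D, or Metal depending on platform.
#include <bgfx/bgfx.h>
#include <cstdint>

void runRenderer(void* nativeWindowHandle, uint32_t width, uint32_t height) {
    bgfx::Init init;
    init.type = bgfx::RendererType::Count;   // Count = let bgfx pick the best native backend
    init.resolution.width  = width;
    init.resolution.height = height;
    init.resolution.reset  = BGFX_RESET_VSYNC;
    init.platformData.nwh  = nativeWindowHandle;
    bgfx::init(init);

    // One clear setup, identical on every backend.
    bgfx::setViewClear(0, BGFX_CLEAR_COLOR | BGFX_CLEAR_DEPTH, 0x303030ff, 1.0f, 0);

    for (int frame = 0; frame < 600; ++frame) {            // stand-in for a real main loop
        bgfx::setViewRect(0, 0, 0, uint16_t(width), uint16_t(height));
        bgfx::touch(0);                                     // submit an empty draw so view 0 still clears
        bgfx::frame();                                      // render and advance to the next frame
    }

    bgfx::shutdown();
}
```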
 
The x86-on-ARM translation layers are coming along nicely. x86 is very well documented and ARM is very flexible.
Microsoft's own efforts here have something like an 80% instruction mapping with about a 2% overhead, so it's not such a far-fetched idea that this could happen in the next couple of years in a near-seamless manner.
Not interested until they show 100% compatibility with x86, AMD64, and Intel 64 with minimal performance loss in actual testing.
 
Every MediaTek, Exynos, or Unisoc I've ever used is crap compared to its Qualcomm counterpart in smoothness and performance (Exynos would be 2nd best, and the others are tied for last place).
 
Every MediaTek, Exynos, or Unisoc I've ever used is crap compared to its Qualcomm counterpart in smoothness and performance (Exynos would be 2nd best, and the others are tied for last place).
Still, maybe Nvidia can whip the Dimensity series into shape. Nvidia knows a thing or three about making ARM CPUs and GPUs; MediaTek brings the manufacturing and distribution. MediaTek's existing stuff may not be top of the line, but they sell more of it than anybody else.
 
If Nvidia manages to pull this off, then the nascent handheld gaming market, dominated by AMD, could see stiff competition.

The incentive for Nvidia is to get technologies such as DLSS 3.5 into the handheld gaming market.
 
If Nvidia manages to pull this off, then the nascent handheld gaming market, dominated by AMD, could see stiff competition.

The incentive for Nvidia is to get technologies such as DLSS 3.5 into the handheld gaming market.
I'm all for Nvidia bringing back their Tegra products, but they failed for a reason. They had games ported to their products, but most of them were from a decade and a half ago. Nvidia stepped out because Apple, Qualcomm, and Samsung were extremely competitive; Nvidia's ARM chips were slower and consumed more power. DLSS won't do anything, since they can all use FSR. Also, if Valve had wanted to use an ARM SoC for the Steam Deck, they would have done so. There's a reason Valve used AMD and not Qualcomm.
 
I'm all for Nvidia bringing back their Tegra products, but they failed for a reason. They had games ported to their products, but most of them were from a decade and a half ago. Nvidia stepped out because Apple, Qualcomm, and Samsung were extremely competitive; Nvidia's ARM chips were slower and consumed more power. DLSS won't do anything, since they can all use FSR. Also, if Valve had wanted to use an ARM SoC for the Steam Deck, they would have done so. There's a reason Valve used AMD and not Qualcomm.
In what way has Tegra failed?
Nvidia sells a crapload of them; they absolutely dominate that market space.
 
In what way has Tegra failed?
Nvidia sells a crapload of them; they absolutely dominate that market space.
When was the last time you saw a tablet or smartphone with an Nvidia SoC in it? Other than the Nintendo Switch and cars, it's mostly dead. Nvidia had to find niches for their SoCs because the ARM market is fierce and saturated. This is why AMD still hasn't made their own ARM-based chips: who would buy them? Apple makes their own chips. Samsung makes their own chips and sometimes buys from Qualcomm. Qualcomm has patents that are still causing issues.
 
When was the last time you saw a tablet or smartphone with an Nvidia SoC in it? Other than the Nintendo Switch and cars, it's mostly dead. Nvidia had to find niches for their SoCs because the ARM market is fierce and saturated. This is why AMD still hasn't made their own ARM-based chips: who would buy them? Apple makes their own chips. Samsung makes their own chips and sometimes buys from Qualcomm. Qualcomm has patents that are still causing issues.
I have 6 automatic floor burnishers that are powered by Tegra.
Tegra has more GPU than most could possibly use in a phone or a tablet, and the Android ecosystem is fractured enough without throwing yet another GPU architecture into the ring. Mali is already broken enough without AMD and Nvidia throwing their hats into the pile; I mean, just look at the state of the AMD-Samsung partnership for Exynos, what a goddamned mess that is.

Put a modern Jetson in an Android tablet or a Chromebook and watch the GPU never go past 15% utilization over a 5-to-7-year life cycle; there's nothing there to make use of it. So that would need a new product, but why bother competing against Qualcomm or Broadcom at that scale? And why spend the money to develop such a small, cut-down GPU architecture, only to get into a bidding war to the bottom for market share against three entrenched players who are huge and likely have continuing supply contracts?

Nvidia's answer to that is their partnership with MediaTek, which gets them into the market with somebody who already has supply contracts, experience working with the AIBs and OEMs, and known functional SoCs. Average-at-best SoCs, but that is something that can be fixed; getting the OEMs to change vendors is hard. No matter what Nvidia throws into the ring, Samsung and Google won't switch away from their own SoCs in their top-end phones, so that leaves Nvidia playing second fiddle on the mid-range offerings, which isn't something they even want to bother with.

That's one of the reasons Nvidia fell out of the market back in 2016: Qualcomm and Broadcom were able to drastically undercut them with much weaker GPU offerings, because nothing on the platform was taking advantage of the GPUs Nvidia was putting out there in the Tegra K1 packages. Intel also went to the mat on pricing with their Celeron N series stuff, because the whole "you can't offer bundles" thing that AMD took them to court over was overturned; turns out they totally can, and back to selling chips for peanuts in bulk they went.
 
Looks like it could be getting close to coming out (maybe by 2025):
https://www.guru3d.com/story/rumor-...e-on-development-of-armbased-ai-pc-processor/

The design finalization, or tape out, of the processor is anticipated to occur in the third quarter of 2024, with subsequent verification processes scheduled for the fourth quarter. The processor is projected to have a market introduction price of approximately $300.

Maybe by then Windows on ARM will be good enough for a lot of stuff.
 
But Nvidia doing ARM is hardly new, if you count the server space.

I mean, Nvidia has had ARM-based development boards around forever. A decade? I forget what they call them. Jetson? Is that it?

I think moving the industry to ARM is a mistake, for a few reasons.

1.) Instruction set is technically irrelevant.

You can build a power-efficient x86 chip, or you can build a bloated and inefficient ARM chip. Yes, x86 does have a built-in performance penalty compared to more RISC-based designs, because it needs a decode stage to break instructions down into RISC-like micro-ops, but the impact of this decode stage is less and less relevant every day.

Anyone remember the Asus Zenfone 2?

[photo of the Asus Zenfone 2]


It was a pretty damn decent mid-range Android phone.

Oh, and it was powered by an Intel Moorefield Atom x86 quad-core SoC. I had one. It performed great, and power-wise it had just as usable battery life as its ARM-based brethren. I bought it as a disposable phone when I was in Brazil (just in case I got robbed, Gringos tend to get robbed in Brazil :p ) but I wound up keeping it when I got home from Brazil, as I liked it more than the Motorola Droid Turbo I had at the time. The software was so-so and bloated, but custom ROMs fixed that. The hardware was excellent (except for maybe the camera).

And this was Intel's first foray into phones. Had they determined that it was worth the investment, I bet they could have improved it even more, but it turns out they really liked the profit margins in the PC space, where they (at the time, in 2014) still more or less controlled the market and could charge whatever they wanted. It was a bit of a blunder on their part not to push harder, though knowing what we know now about the failure of their 10nm process, it would likely have died when they couldn't provide a competitive process anyway.

Instruction set pretty much only matters for binary compatibility these days. You can do almost anything, big or small, with almost any instruction set, provided you have enough bits for addressing.

What really matters is the underlying architecture of the chip design that supports that instruction set. That - more so than the instruction set - is what determines whether a chip performs well or is efficient enough. From this technical perspective, instruction set (x86 vs. ARM vs. RISC-V vs. others) is really just a distraction.


2.) So why be interested in instruction set then?

Because it is all about control.

Instruction set is legally important. Who owns the intellectual property? Can they use it to lock the competition out, or lock their customer base in, or do any other borderline illegal market manipulations?

Intel has been doing this for ages, suing everyone and everything in order to maintain control over the PC market. They got so obsessed with it that they got distracted and really missed the whole mobile (or well, mobile smaller than laptops) and embedded IoT markets.

If the market as a whole is open to a shift in instruction set, using that opportunity to move from x86 to ARM is a terrible waste of it. At least from the consumer's perspective. We'd be moving from one proprietary instruction set, which has been the poster child for pseudo-legal market manipulation that harms consumers and competition, to another proprietary instruction set which, while it looks more free and open on the surface, is actually much worse.

Sure, ARM Holdings licenses their cores to anyone willing to pay. They have the potential to be a bad actor, but they haven't been thus far. Maybe we can trust them? As foolish and shortsighted as the concept of trust is when business and money are involved, the problem here isn't ARM itself, but its licensees. Let me explain.

The ARM license allows for customization. Licensees can and do customize the ARM designs in ways that break binary compatibility, and voilà, instead of an Intel/AMD duopoly, now every single little ARM licensee can make their own proprietary chip design that only runs their software, and the consumer has no control whatsoever. Do as we say or go pound sand.

And while I like the concept of RISC-V being an open-source instruction set, it does the same thing. Anyone can customize the instruction set as they see fit, and it can be used to proprietarize hardware and harm users and customers by not allowing them to use the hardware they own as they see fit.


3.) What to do about it?

I don't know. Things get worse every year. Every year we go further down the path of proprietary locked bullshit.

Personally, I would like a law that requires unlocked bootloaders and binary compatibility for every device made available to consumers or sold off the shelf for enterprise, with the only exception being highly specialized devices that place additional requirements on the hardware such that a high-volume, binary-compatible CPU isn't possible.

Unfortunately, this will never get past the corrupt lobbyists who make their money from companies that rake in billions by manipulating markets and abusing customers.

So, things are just going to get worse. Rather than a general-purpose computer you can do whatever you want with, more and more you are going to be restricted, locked down, and limited until you have no choices left at all.

Don't like telemetry and data collection on your fancy brand new laptop? Get pissed off that the manufacturer now forces you to watch ads every 5 minutes? Tough luck.

In the past you could at least run Linux or some other community software/OS, but now either your bootloader is cryptographically locked, or you have a custom ISA that prevents binary compatibility with anything else.

This is the way we are headed. That is, if you're even offered a computer to buy in the first place and it doesn't all just become a cloud-based subscription model.

The dystopia is real.
 
You can build a power-efficient x86 chip, or you can build a bloated and inefficient ARM chip. Yes, x86 does have a built-in performance penalty compared to more RISC-based designs, because it needs a decode stage to break instructions down into RISC-like micro-ops, but the impact of this decode stage is less and less relevant every day.
ARM and what we call RISC now do the same:
[pipeline block diagram]
Apple A14:
[Firestorm core block diagram]


Outside of the simplest chips, a decode-to-micro-ops stage will always be present (and always has been for x86 CPUs since the '70s).

With a modern CPU you probably want that stage, and it can be a net benefit rather than a cost: it helps keep most of your ALUs and pipeline busy most of the time. There is so much ALU and pipeline complexity that some flexibility in which micro-ops to issue, and in what order, is desirable (despite the security issues it can open up). You can choose and optimize what you do with your micro-ops; it has to stay extremely basic because of how fast everything flies through, but it works for the common scenarios.
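A concrete way to see it: the same one-liner below compiles to a single read-modify-write instruction on x86-64, which the core's front end then cracks into load/add/store micro-ops, while on AArch64 the compiler already emits those three steps as separate instructions. (The assembly in the comments is roughly what gcc/clang produce at -O2; exact registers vary.)

```cpp
// The same function on two ISAs. On x86-64 the front end cracks the single
// memory-operand ADD into load + add + store micro-ops; on AArch64 those three
// steps are already separate instructions in the binary.
//
//   x86-64 (gcc/clang -O2):        AArch64 (gcc/clang -O2):
//     add  QWORD PTR [rdi], rsi      ldr  x2, [x0]
//     ret                            add  x2, x2, x1
//                                    str  x2, [x0]
//                                    ret
//
// Either way the execution units end up chewing on roughly the same three
// micro-operations; the difference is where the cracking happens.
void accumulate(long* counter, long delta) {
    *counter += delta;
}
```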

Casey Muratori went over a nice explanation of this on the Primeagen's channel:

https://www.youtube.com/watch?v=xCBrtopAG80
 
The ARM license allows for customization. Licensees can and do customize the ARM designs in ways that break binary compatibility, and voilà, instead of an Intel/AMD duopoly, now every single little ARM licensee can make their own proprietary chip design that only runs their software, and the consumer has no control whatsoever. Do as we say or go pound sand.

And while I like the concept of RISC-V being an open-source instruction set, it does the same thing. Anyone can customize the instruction set as they see fit, and it can be used to proprietarize hardware and harm users and customers by not allowing them to use the hardware they own as they see fit.

Nobody is going to be making arbitrary changes to established instruction sets for broad-market, general-purpose use.

They'd fuck up the entire software ecosystem, hurt their own performance, and massively increase maintenance. It's pointless.

Great, now they need to maintain a proprietary fork of some compiler, contend with dumpstering the millions of man-hours that went into tooling like optimizers, carry kernel code that probably has zero hope of ever getting mainlined, and deal with eight million other little things...

Or they just deliver the software at the edge, call it a day, and it's easier for literally everyone.
 
Nobody is going to be making arbitrary changes to established instruction sets for broad-market, general-purpose use.

They'd fuck up the entire software ecosystem, hurt their own performance, and massively increase maintenance. It's pointless.
Didn't Apple do this? Not big, but a short list of new custom instructions:

https://opensource.apple.com/source...st/MC/ARM/arm-memory-instructions.s.auto.html

https://developer.apple.com/documentation/xcode/writing-arm64-code-for-apple-platforms
Apple platforms diverge from the standard 64-bit ARM architecture in a few specific ways. Apart from these small differences, iOS, tvOS, and macOS adhere to the rest of the 64-bit ARM specification. For information about the ARM64 specification, including the Procedure Call Standard for the ARM 64-bit Architecture (AArch64), go to

I can imagine "nobody but the Apples of the world" being quite true, too.
 
They don't break ARMv8 or ARMv9 compatibility; it'll natively run either with zero qualms.
 
They don't break ARMv8 or ARMv9 compatibility; it'll natively run either with zero qualms.
I had read it as if it were about them adding stuff that others would not be able to run, so people must buy their hardware to run their software, not about them removing their own ability to run applications and libraries.

Rereading it, I was wrong. I guess it would play out differently for people who make money selling hardware, like Apple, versus those who don't (or do so at terrible margins) and must sell software on top of it...

Would it go the other way too: an Apple binary that uses those extra instructions, compiled with the Apple target triple on gcc/clang, not running natively out of the box on other, non-Apple ARM chips? Or would those instructions simply be emulated in software and still run, just slower?
 
The only noteworthy Apple extensions I see are mostly private/undocumented, or just implementation details you aren't meant to care about under normal circumstances.

Which is to say, whatever it is, it's probably abstracted away behind a library. If they value compatibility, the library will just have a fallback - no different from implementations of the CRT and whatnot, where it'll just pick the fastest implementation of a function like memcpy() at runtime, e.g. AVX vs. SSE.
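That kind of runtime selection looks roughly like this; a minimal sketch using GCC/Clang's __builtin_cpu_supports() (the copy_* names are made up for the example, the dispatch-through-a-function-pointer pattern is the standard one):

```cpp
// Minimal sketch of runtime dispatch: probe the CPU once, then route calls
// through a function pointer to the best available version. The copy_* names
// are made up for the example; glibc and Apple's libraries do this kind of
// selection internally for memcpy() and friends.
#include <cstddef>
#include <cstring>

namespace {

void copy_scalar(void* dst, const void* src, std::size_t n) {
    std::memcpy(dst, src, n);                    // portable fallback
}

#if defined(__x86_64__)
__attribute__((target("avx2")))
void copy_avx2(void* dst, const void* src, std::size_t n) {
    std::memcpy(dst, src, n);                    // body compiled with AVX2 enabled
}
#endif

using CopyFn = void (*)(void*, const void*, std::size_t);

CopyFn pick_copy() {
#if defined(__x86_64__)
    __builtin_cpu_init();                        // required before querying features this early
    if (__builtin_cpu_supports("avx2")) return copy_avx2;
#endif
    return copy_scalar;                          // older x86, ARM, anything else
}

// Resolved once at startup; every later call is just an indirect call.
const CopyFn fast_copy = pick_copy();

}  // namespace

int main() {
    char src[64] = "runtime-dispatched copy";
    char dst[64] = {};
    fast_copy(dst, src, sizeof src);
}
```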
 
If they value compatibility,
But why would they even mind at all whether their binaries run on other platforms?

It is not like it is uncommon (that is why people usually care about open source, after all: how often is it better to compile specifically for your target...).
 
But why would they even mind at all whether their binaries run on other platforms?

It is not like it is uncommon (that is why people usually care about open source, after all: how often is it better to compile specifically for your target...).

I mean, if you're relying purely on an Apple library for Apple software, I don't think you have much expectation to run elsewhere in the immediate term.

If said library has no means to fall back and happens to use instructions they added and subsequently removed at some point, software using it is going to crash.

Apple could just throw their hands up and say oh well, stop using old shit and use an updated build of whatever.
Or they just have a graceful runtime fallback so when you upgrade to the M69 it doesn't explode in the first place.

Neither would surprise me, given Apple.
 
subsequently removed at some point,
Why would they ever remove said instructions? It seems to be a very small set of extra ones.

I mean, if you're relying purely on an Apple library for Apple software, I don't think you have much expectation to run elsewhere in the immediate term.
Even if we don't use any Apple library (say, a pure standard-library C project, nothing custom to macOS), the compilers that target Apple Silicon could use those extra instructions, no?
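For plain C/C++, compilers generally only emit optional instructions behind explicit feature gates, so a generic ARMv8 build stays portable, and as noted above Apple's private instructions are reached through Apple's own frameworks rather than generated for ordinary code. A hedged sketch of what that gating usually looks like, using the optional CRC32 extension purely as a stand-in:

```cpp
// Sketch: the optional CRC32 instructions are only used when the compile
// target advertises them (__ARM_FEATURE_CRC32); otherwise plain C runs.
// Nothing here can pull in Apple-private instructions: the compiler only
// emits architecturally defined, feature-gated instructions for ordinary code.
#include <cstddef>
#include <cstdint>
#if defined(__ARM_FEATURE_CRC32)
#include <arm_acle.h>                                       // ACLE intrinsics such as __crc32b
#endif

uint32_t checksum(const uint8_t* data, std::size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
#if defined(__ARM_FEATURE_CRC32)
    for (std::size_t i = 0; i < len; ++i)
        crc = __crc32b(crc, data[i]);                       // hardware CRC32 extension
#else
    for (std::size_t i = 0; i < len; ++i) {                 // bit-by-bit software fallback
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
#endif
    return ~crc;
}
```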
 