Nvidia expresses interest in SoftBank's chip company Arm Holdings: Bloomberg News

Okay.

x86 is an invention that built on prevailing CISC ideas. RISC is an invention that built upon criticisms of CISC. The P6 core is an invention that blended the two. IA64 is an invention built to address criticisms of both CISC and RISC.

x86-64 is an extension of IA32 to 64-bit.

You claimed that it is a 'new ISA'. That's hyperbole.

That's semantics, not hyperbole.

x86 was born out of the Computer Terminal Corporation with IBM, TI, Intel, and others. We'll just discount the history of computing because Intel did it all on their own and it was just natural evolution anyways. (PS: this is hyperbole)

I stand by the ISA statement, for it was AMD64 before it was x86-64. If it were to follow history it would have been x86, 286, 386... IA-32, IA-64, but it isn't. Intel caused this semantic nightmare with their use of IA-64 for the Itanium and previous monopolistic naming practices like Pentium.

If you look at the lists of ISAs, every other architecture follows clear distinctions except x86, which has this garbled mishmash of anti-competitive naming forks.
 
You're upset because Intel -- of all companies! -- used names for which they own trademarks how they saw fit?

Lol.

Not upset. That's history. AMD was born because the industry (cough cough, IBM) refused to allow a monopoly supplier of chips.

Since we're talking feelings: do you look back on the history of x86 and wish all the legal battles Intel waged against competitors had been won by Intel, and they were the sole provider now? Because that's the alternative.
 
Not upset. That's history. AMD was born because the industry (cough cough, IBM) refused to allow a monopoly supplier of chips.
...and then they continued to copy Intel until Intel innovated a step too far.
Do you look back on the history of x86 and wish all the legal battles Intel waged against competitors had been won by Intel, and they were the sole provider now? Because that's the alternative.
I'm not really interested in pursuing all of the possible ripple effects of a change here or change there in history. I don't expect companies to play 'fair', as fair is decided in courtrooms, not boardrooms. That's just how business, and humans themselves, work.

To that end, AMD has existed as a 'check' on Intel, but little more. Where AMD has gotten 'ahead' they've done so by being in the right place at the right time while Intel stumbled. As much as I've rooted for AMD in past decades and as much as I recommend them now (by default), I'm pretty clear on what they do and don't bring to the table. Particularly since they're now fabless and have to take into account the direction of their fab partners instead of steering their own fabs for their own architectures.

In a sense, that means that AMD's future manufacturing possibilities are more defined by Apple's priorities at TSMC than by their own. The effects of this can be previewed in AMD shipping second-gen "7nm" silicon that's just about as fast as Intel's aging 14nm architecture, and not particularly more efficient, certainly not as much as the process advantage might suggest.
 
In a sense, that means that AMD's future manufacturing possibilities are more defined by Apple's priorities at TSMC than by their own. The effects of this can be previewed in AMD shipping second-gen "7nm" silicon that's just about as fast as Intel's aging 14nm architecture, and not particularly more efficient, certainly not as much as the process advantage might suggest.

Well, the thing is we don't know yet how Intel will perform at 7nm. Just like with their 10nm parts, they may lose a big chunk of the clock advantage they enjoy on a very, very refined 14nm process and then be even further behind AMD, who will be on 5nm+ by then. Personally I don't see Intel ever regaining a meaningful advantage over AMD; those days are long over. Their distinct foundry advantage became a huge disadvantage, and now they are a vassal of TSMC as well. In fact, being fabless seems to be the way to go now, since you mentioned Apple. Then look at NVIDIA's success: they're fabless and gaining traction, with a bigger market cap than Intel. So despite AMD going fabless because of financial circumstances, it seems to be the smart bet for the future. Not every high-technology company really needs fabs, since they can contract out now between Samsung and TSMC, who will happily keep building more foundries to accommodate them.

IMO, Intel just looks like an old rotting ship riding on past success and old market share. That share can and will erode, and it won't take decades like someone foolishly suggested earlier in this thread; technology leadership and market share can change very quickly. We're at an inflection point now where even the most stubborn IT administrator at a large corporation won't be able to justify an Intel server despite all the Intel kickbacks.
 
Well, the thing is we don't know yet how Intel will perform at 7nm. Just like with their 10nm parts, they may lose a big chunk of the clock advantage they enjoy on a very, very refined 14nm process and then be even further behind AMD, who will be on 5nm+ by then. Personally I don't see Intel ever regaining a meaningful advantage over AMD; those days are long over. Their distinct foundry advantage became a huge disadvantage, and now they are a tributary of TSMC as well. In fact, being fabless seems to be the way to go now, since you mentioned Apple. Then look at NVIDIA's success: they're fabless and gaining traction, with a bigger market cap than Intel. So despite AMD going fabless because of financial circumstances, it seems to be the smart bet for the future. Not every high-technology company really needs fabs, since they can contract out now between Samsung and TSMC, who will happily keep building more foundries to accommodate them.
With Intel having more than 10x the annual revenue of AMD, money has a way of solving problems and challenges.
 
With Intel having more than 10x the annual revenue of AMD, money has a way of solving problems and challenges.

Well money hasn't solved Intel's foundry issues and that's why they keep bleeding executives.
 
Well money hasn't solved Intel's foundry issues and that's why they keep bleeding executives.
But it hasn't seemed to matter. While they have been playing executive musical chairs, dropping the ball, missing markets, letting a competitor catch back up, they are still making money hand over fist. It highlights how a monopoly with a cash cow can do mediocre work and often fail badly and still be insanely successful financially. It's one of those unsolved mysteries of the universe, like how Andrea Bocelli ties his bow tie without any help.
 
But it hasn't seemed to matter. While they have been playing executive musical chairs, dropping the ball, missing markets, letting a competitor catch back up, they are still making money hand over fist. It highlights how a monopoly with a cash cow can do mediocre work and often fail badly and still be insanely successful financially. It's one of those unsolved mysteries of the universe, like how Andrea Bocelli ties his bow tie without any help.

Well that’s why I mentioned them riding on old successes. I agree it’s difficult to break a monopoly but Intel is under attack by AMD and NVIDIA in the professional markets and they’re losing to both of them. While they haven’t ceded much marketshare yet, they will as I believe marketshare is a lagging indicator. With Intel losing massive valuation, their fallback is now cash on hand which isn’t an enviable position for a monopoly.
 
...and then they continued to copy Intel until Intel innovated a step too far.

They made their own product that was compatible with the industry. Did Intel make the OS, or the software that ran on that OS? No. They were one player of many that tried to artificially corner the market. The governing bodies rightly ruled otherwise.

I'm not really interested in pursuing all of the possible ripple effects of a change here or change there in history. I don't expect companies to play 'fair', as fair is decided in courtrooms, not boardrooms. That's just how business, and humans themselves, work.

I'm glad you're not in charge of our laws. Bring back the robber barons!!! Is it too early to ask about bringing back slavery? How about indentured servants?

To that end, AMD has existed as a 'check' on Intel, but little more. Where AMD has gotten 'ahead' they've done so by being in the right place at the right time while Intel stumbled. As much as I've rooted for AMD in past decades and as much as I recommend them now (by default), I'm pretty clear on what they do and don't bring to the table. Particularly since they're now fabless and have to take into account the direction of their fab partners instead of steering their own fabs for their own architectures.

Yeah... that "little more" was enough for Intel to use strong-arm tactics to prevent competition. They went too far. Their activities were regulated, and we're only now seeing a truly competitive market.

In a sense, that means that AMD's future manufacturing possibilities are more defined by Apple's priorities at TSMC than by their own. The effects of this can be previewed in AMD shipping second-gen "7nm" silicon that's just about as fast as Intel's aging 14nm architecture, and not particularly more efficient, certainly not as much as the process advantage might suggest.

Process and performance only go so far. While not discounting them as metrics, you can also win contracts and collaborations through ethical business practices.

Take the console market. What was once Motorola became IBM and is now ARM. What was once Intel became IBM and is now AMD.

Apple, while influential in the space, is clearly not going to make inroads into other spaces. They have their niche and are well poised to exploit it. Thinking Apple can sway long-term investments in fabs is shortsighted at best.

The more players using TSMC / Samsung / others, the more will be invested in future technologies to drive the market.
 
You're upset because Intel -- of all companies! -- used names for which they own trademarks how they saw fit?

Lol.

Well, it was done to confuse the market, on purpose. There was a time when there was actual competition in x86 CPUs. Intel, instead of offering to sell licences and taking control of x86 (which they could have easily done), decided to trademark names and purposely confuse things to make Cyrix, VIA, AMD, Transmeta, Rise, National Semi, IBM, NexGen, NEC, RDC, SiS (I'm sure I'm forgetting a bunch) products look inferior, or at least "unofficial".

It is actually pretty hilarious that 20 years later, that choice is going to be what kills Intel, it seems. Intel was in a position to become ARM, and x86 everything... and they would also have been able to roll out things like Itanium, in the same way that ARM is about to invade HPC thanks to their addition of the Scalable Vector Extension. That is an extension, not a rewrite. Still, Intel would have had the clout to force industry change if that is what they really wanted to do.

Intel made a choice (well, a bunch of them) that ensured them a relatively short span of 20-25 years of dominance... at the expense of their longer-term future, IMO. Instead of folding the world's chip companies and wannabe chip companies into an x86 landscape controlled by them, they tried to lock everyone out. AMD forced them to go 64-bit because they were able to beat them in court and hold an open licence. The x86 lockdown outside of AMD and VIA created the conditions for ARM to rise: they created a chip standard that all the other chip wannabes could use. Intel locked things down when they could and made bank. Now the chickens are coming home to roost. Every other chip manufacturer has zero love for Intel or x86... and most are actively working to kill it. Couple that with a modern Intel that seems to be missing more than they're hitting... and going over on costs on almost everything. More and more it looks like x86 is just going to barely make it to 50.

On topic... I'm sure Nvidia would LOVE to get a hold of ARM. They got a nice $$$ deal out of Intel... but I'm sure they still feel like they were done dirty. lol. I can't imagine regulators will be OK with any major licence holder buying ARM... but who knows. I am not sure Nvidia would be smart enough to leave ARM to operate as it has been. If they tried to Intel it and lock things down... or just stopped designing licensable cores, the same fate that looks like it's heading Intel's way will eventually hit ARM.
 
Personally I don't see Intel ever regaining a meaningful advantage over AMD, those days are long over.
If Apple can pull more out of ARM... Intel can pull more out of x86.

Their distinct foundry advantage became a huge disadvantage and now they are a vassal of TSMC as well.
The foundry limitation is a weakness or a strength, with its merits judged only by how well it enables the architectures built on it. It's been both for Intel, for AMD back when they had fabs, and for TSMC; more so for the latter two.
In fact, being fabless seems to be the way to go now since you mentioned Apple.
There's a pretty big difference in end-use here, not to mention volume. There's not a single 'way to go' that fits every end use.
That share can and will erode and it won't be decades like someone foolishly suggested earlier in this thread, technology leadership and marketshare can change very quickly. We're at an inflection point now where even the most stubborn IT administrator at large corporations won't be able to justify an Intel server
At the very worst, they'd be able to justify an Intel server because that's what's actually available to buy. There's an order of magnitude of difference between the volume that these companies produce.
But it hasn't seemed to matter. While they have been playing executive musical chairs, dropping the ball, missing markets, letting a competitor catch back up, they are still making money hand over fist.
How did Chrysler last as long as they did...

Intel produces much higher quality parts than Chrysler ever did, or is likely to ever do. Chrysler survived because people needed vehicles, and Intel survives, thrives even, because they make excellent products.
for a monopoly
This is also extremely silly. They're not even close.
 
If Apple can pull more out of ARM... Intel can pull more out of x86.

I really wouldn't be so sure of that. x86 is pretty much what it is... any pulling left to do is via extensions, which require software to unlock.

Apple is not really pulling anything much out of ARM. They design their own cores, sure, and they are slightly better than the ARM-designed cores. However, what really makes them superior is all the little bits they can bolt on. Owning the complete Apple software ecosystem as they do allows them to bolt new things on and implement them almost instantly.

Intel has no such vertical integration from hardware on down. I mean, they can get the software industry to perhaps implement AVX or whatever else they want to bolt on... but they can't tell software developers that going forward ALL Intel devices will have X or Y feature. Heck, they are not even continuing AVX-512 on new products. As Linus said, they need to focus on their core... and trying to add bits like Apple is doing is honestly not likely to go well. Intel missed the boat... they have been talking about working with partners to custom-design SoCs, but so far nothing of note has come from all that talk (it's been 5-6 years at least now that they have been talking about 3D chip stacking and offering partners the ability to add AI and the like to their designs).
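That "software has to unlock it" point can be made concrete: an ISA extension only matters once code detects it and opts in at runtime. A minimal, Linux-only sketch (the helper name `cpu_has_flag` is my own invention; C code would more likely use GCC/Clang's `__builtin_cpu_supports`):

```python
def cpu_has_flag(flag: str) -> bool:
    """Return True if /proc/cpuinfo advertises the given feature flag (e.g. 'avx2').

    Falls back to False on systems without /proc/cpuinfo (non-Linux),
    or on kernels that label the feature line differently (e.g. ARM).
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return flag in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

# A library would then dispatch on the result, e.g.:
# kernel = avx2_kernel if cpu_has_flag("avx2") else scalar_fallback
```

This is exactly why extensions take years to pay off: every application or library needs a dispatch path like this before the new silicon does anyone any good.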

It's looking more and more like Apple is going to lay some serious hurt on x86. x86 is going to go back to being that dumpy-looking Bill Gates clone Apple can make fun of. I really, really hope they hire Justin Long to do a few new commercials when they launch the ARM-powered MacBooks. lol
 
They made their own product that was compatible with the industry.
Prior to the Athlon? Sorry :ROFLMAO:
I'm glad you're not in charge of our laws. Bring back the robber barons!!! Is it too early to ask about bringing back slavery? How about indentured servants?
You can take this leap into trolltown on your own.
Yeah... that "little more" was enough for Intel to use strong-arm tactics to prevent competition. They went too far. Their activities were regulated, and we are only now seeing a truly competitive market.
Intel offered their customers a better deal. That's the only fact we have; the rest is opinion, and there are a lot of those.
Process and performance only goes so far. While not discounting them as a metric, you can also win contracts and collaborations by ethical business practices.
If you don't have a product, you don't get business. See: AMD for most of their existence.
Take the console market. What was once Motorola became IBM and is now ARM. What was once intel, became IBM and is now AMD.
Talk about generalizing to distort fact :D

ARM still isn't proven for more than contemporary mobile gaming, in any iteration, and AMD won the last console round by being the only company to have both a passable CPU and a passable GPU, and of course, being so poor that they couldn't say no.
Apple while influential in the space is clearly not going to make inroads into other spaces...Thinking apple can sway long term investments in fabs is short sighted at best.
They're TSMC's biggest customer. Are you going to argue that TSMC would choose to optimize their technology for AMD over Apple?
They have their niche and are poised well to exploit it.
They make some mighty fine computing appliances. I'm even considering them more with their move to ARM for laptops, depending on how this next round of Windows ultrabooks goes. But that's a separate discussion...
The more players using TSMC / Samsung / others, the more will be invested in future technologies to drive the market.
I really can't agree more. Or at least, agree right now. There's a catch involved, and it's one that AMD has already felt: semiconductor development is insanely expensive. Intel has been stuck on the same process for what... seven years now? And their CPUs are still competitive? And there's only one other company in the world (TSMC) that produces products that can even hope to compete?

Maybe ARM or even RISC-V will change that eventually, especially if foundry progress continues to grind to a halt. The only caution is the same one we have with regard to Intel: if it's down to just Samsung and TSMC for semiconductor advancements (disregarding Chinese fabs), wouldn't we wind up with the same problem?
Well, it was done to confuse the market, on purpose. There was a time when there was actual competition in x86 CPUs. Intel, instead of offering to sell licences and taking control of x86 (which they could have easily done), decided to trademark names and purposely confuse things to make Cyrix, VIA, AMD, Transmeta, Rise, National Semi, IBM, NexGen, NEC, RDC, SiS (I'm sure I'm forgetting a bunch) products look inferior, or at least "unofficial".
Having used pre-Athlon non-Intel x86 as an end user, I can say those products were inferior. IA32 and the Pentium Pro (P6) line proved superior literally up to the release of Zen, which is competitive when handed a stack of advantages.
It is actually pretty hilarious that 20 years later, that choice is going to be what kills Intel it seems. Intel was in a position to become ARM, and x86 everything...
Given the stack of advantages needed to make Zen competitive -- advantages not developed by AMD themselves -- I don't see Intel as being 'killed'.

I do however see Intel's vision of x86 as not being as universal as it perhaps could be. While Intel has wrung progressively ever more performance out of x86, most of that in the last decade and a half or so has been from using extensions. Intel's efforts to shrink x86 didn't go terribly well (the in-order Atom CPUs which are all just Skylake now), and there are limits to scaling it up as well.
On topic... I'm sure Nvidia would LOVE to get a hold of ARM. They got a nice $$$ deal out of Intel... but I'm sure they still feel like they were done dirty. lol. I can't imagine regulators will be OK with any major licence holder buying ARM... but who knows. I am not sure Nvidia would be smart enough to leave ARM to operate as it has been. If they tried to Intel it and lock things down... or just stopped designing licensable cores, the same fate that looks like it's heading Intel's way will eventually hit ARM.
Honestly I don't see Nvidia actually having interest in buying ARM. Or Apple for that matter. Neither have a need to steer the overall market, nor are they really interested in stifling innovation; Apple is the ARM innovator, whilst Nvidia doesn't really need to push boundaries with ARM.

And I really don't see ARM as being a panacea for CPUs outside of not being Intel. I expect ARM to be what innovators use to generalize their software solutions into cross-platform frameworks, something already more or less accomplished for the web and most consumer-facing code, but that leap means that there's room for RISC-V.

Going forward, I see CPU technology more or less self-categorizing by level of branch-processing performance alongside level of vector performance and specialization: today's x86 CPUs are mostly branch-processing focused, today's GPUs are highly specialized vector processors, and Fujitsu's ARM supercomputer solution, Intel's Xeon Phi, and various ARM SoCs with emphases on graphics processing or inference are examples of something in between.
 
Given the stack of advantages needed to make Zen competitive -- advantages not developed by AMD themselves -- I don't see Intel as being 'killed'.

I do however see Intel's vision of x86 as not being as universal as it perhaps could be. While Intel has wrung progressively ever more performance out of x86, most of that in the last decade and a half or so has been from using extensions. Intel's efforts to shrink x86 didn't go terribly well (the in-order Atom CPUs which are all just Skylake now), and there are limits to scaling it up as well.

Honestly I don't see Nvidia actually having interest in buying ARM. Or Apple for that matter. Neither have a need to steer the overall market, nor are they really interested in stifling innovation; Apple is the ARM innovator, whilst Nvidia doesn't really need to push boundaries with ARM.

And I really don't see ARM as being a panacea for CPUs outside of not being Intel. I expect ARM to be what innovators use to generalize their software solutions into cross-platform frameworks, something already more or less accomplished for the web and most consumer-facing code, but that leap means that there's room for RISC-V.

Going forward, I see CPU technology more or less self-categorizing by level of branch-processing performance alongside level of vector performance and specialization: today's x86 CPUs are mostly branch-processing focused, today's GPUs are highly specialized vector processors, and Fujitsu's ARM supercomputer solution, Intel's Xeon Phi, and various ARM SoCs with emphases on graphics processing or inference are examples of something in between.

When I say x86 is going to die... I'm not exempting AMD. They are going to be in the same boat. x86 is just losing relevance. ARM already owns the largest personal computing market (yeah, it's not desktops or laptops; like it or not, PC today should mean pocket computer). Mobile is the main market at this point, and Intel has no stake in that market at all, as we know. By giving up that market they have lost all the rest.

Laptops are next... and I have almost no doubt at this point Apple is going to turn Windows-based laptops into also-ran junk. High-end Windows laptops will evaporate from the market. No one is going to want to make two-thousand-dollar Windows laptops that lose in every single benchmark to a thousand-dollar Apple ARM MacBook. I don't know if the first desktop Macs will be as attractive; however, if Apple gets all the content companies like Adobe to tailor their software packages for them, it's very possible Apple grows there as well.

By the time we get there, I suspect Microsoft will take another swing at ARM Windows. There is also a very real possibility that Google decides to go after Windows for real with ChromeOS. I know right now that seems insane... but if Apple swings some market share, Google will follow. Rumors of their "Whitechapel" Google 5nm CPU have been swirling since before Apple announced they were dumping Intel. I could well see Google powering their Pixel 6 with Whitechapel, releasing some mid-range Chromebooks with the same chip, and a year later releasing a more powerful version to really compete with Apple's MacBooks. Intel is going to get cut out of the laptop market... and when that happens, the x86 desktop will be on life support.

I agree with you on NV... I can't imagine they are really interested in ARM. I don't doubt they inquired about what it would cost them; I just don't see them wanting to go that way. We agree on that. I really hope SoftBank just sits on ARM and finds another way to raise a bit of cash. ARM being owned by people not looking to do anything but collect dividends is good for the industry. lol

As for RISC-V, yeah, it was always dead in the water because ARM already fills that void. There is no incentive for anyone to use RISC-V. ARM licences are reasonable... and you don't have to do any design work yourself. RISC-V is cool in theory until you realize the only cores you could just drop into a product are university-level student-project affairs that would never compete with anything. So you're forced to design your own, which can and almost always does cost billions... and then you would be the only one with a modern RISC-V chip. :) I guess if NV or Apple or Samsung or the like bought ARM, passed regulation, and stopped licencing, perhaps RISC-V gets some traction. But really, even if that happened, all the major players already have their licences, and no change of ARM ownership could end that.

As for x86 in HPC, since you mentioned it... I'm not so sure x86 has a long future in that field. Yes, the US government has purchased a couple of upcoming Intel-powered machines that should retake the performance lead. However, I think the last few years have shown that for that market x86 isn't really ideal unless you're willing to install just as many GPUs from NV or AMD (POWER has the same issue). Fujitsu has shown that isn't needed for those workloads if you design your main CPU for the task (which is what ARM SoC designs are good at); Fugaku is a general-compute machine that also does all the vector stuff GPUs generally do. China sort of proved the same thing with their Sunway stuff... but that's a different can of worms. The HPC field is getting interesting; it will be interesting to see how many Fujitsu-powered machines Cray manages to sell this year.
 
Prior to the Athlon? Sorry :ROFLMAO:

AMD DX2-66, Thunderbird, K6-III... just a few that had stints.

You can take this leap into trolltown on your own.

Not trolling, just pointing out what humans do without rules.

Intel offered their customers a better deal. That's the only fact we have; the rest is opinion, and there are a lot of those.

Judgements are more than opinions. But please try to rewrite history to get your fanboy on.

If you don't have a product, you don't get business. See: AMD for most of their existence.

Now who's trolling?

Talk about generalizing to distort fact :D

That's just what happened. The coolest thing is to see IBM in the middle of all those transitions.

ARM still isn't proven for more than contemporary mobile gaming, in any iteration, and AMD won the last console round by being the only company to have both a passable CPU and a passable GPU, and of course, being so poor that they couldn't say no.

Are you sure you don't want to switch your statement here?

They're TSMC's biggest customer. Are you going to argue that TSMC would choose to optimize their technology for AMD over Apple?

I think they optimize their technology for the greatest common denominator. Some bulk, some custom. As Apple moves to higher-end chips they are actually following the roads paved by NVIDIA, AMD, and others.

They make some mighty fine computing appliances. I'm even considering them more with their move to ARM for laptops, depending on how this next round of Windows ultrabooks goes. But that's a separate discussion...

...and have yet to make a high-end product.

I really can't agree more. Or at least, agree right now. There's a catch involved, and it's one that AMD has already felt: semiconductor development is insanely expensive. Intel has been stuck on the same process for what... seven years now? And their CPUs are still competitive? And there's only one other company in the world (TSMC) that produces products that can even hope to compete?

Maybe ARM or even RISC-V will change that eventually, especially if foundry progress continues to grind to a halt. The only caution is the same one we have with regard to Intel: if it's down to just Samsung and TSMC for semiconductor advancements (disregarding Chinese fabs), wouldn't we wind up with the same problem?

Having used pre-Athlon non-Intel x86 as an end user, I can say those products were inferior. IA32 and the Pentium Pro (P6) line proved superior literally up to the release of Zen, which is competitive when handed a stack of advantages.

Again, so superior that strong-arm tactics were used.

BTW, Zen 1 was at GlobalFoundries and was at the very least competitive.
 
Having used pre-Athlon non-Intel x86 as an end user, I can say those products were inferior.
Depends how far back you go.
The NEC V20, V30, and V33 were all superior by 20-30% clock-for-clock in hardware compared to the Intel/AMD 8088, 8086, and 80286 CPUs, respectively.
I mean, it was x86-16, and the early 1980s, but still, it's totally relevant nearly 40 years later! :D

IA32 and the Pentium Pro (P6) line proved superior literally up to the release of Zen, which is competitive when handed a stack of advantages.
The Athlon 64/X2 destroyed every variant of NetBurst in nearly all real-world usage, and it wasn't until the Core, and primarily Core 2 (Conroe), CPUs that Intel became competitive again and stayed on top until now.
I think what you said in another thread, that Intel is Intel's biggest problem, especially at this point, really rings true.
 
I don't think AMD moving to 7nm caught Intel off guard; it's not like they don't have insider industry sources. What happened was Intel grossly overestimating their ability to deliver 10nm/7nm at the density they needed and falling behind. They didn't have a backup plan at all; it was all or nothing, which points to massive incompetence over there. AMD on the other hand was running on fumes and managed to release Zen, which like you say was nothing special in its first iteration, but improved it enough with subsequent releases on a fast cadence that they now have Intel beat in IPC, and with the new Zen coming out, probably across the board even with clock speed taken into account.

AMD planned on using GloFo 7nm until GloFo cancelled all 7nm development, and then AMD moved to TSMC. AMD would be in the same situation as Intel if it had owned the foundry.

x86 is in a unique position in that AMD and Intel went in different directions for their x86 CPUs. This is why AMD CPUs aren't affected by Meltdown while Intel's are, and also why some ARM-based CPUs are affected by Spectre, including Apple's ARM-based SoCs. It should also be mentioned that Intel never gave AMD any of Intel's designs, which they were supposed to, but Intel changed the deal. As a result AMD had to innovate, which is why modern x86 CPUs have built-in memory controllers, 64-bit, and at some point 3DNow! technology. 3DNow! is dead, but it started the whole extensions thing in x86.

There is much more variety of design in the ARM world than in the x86 world. In the ARM world one finds vanilla ARM cores, semi-custom ARM cores, and fully custom cores. There is a broad choice of vanilla cores as well: just look at ARM's catalogue, where you can find everything from cheap single-issue, in-order 32-bit cores without cache or FPU to wide OoO server-class 64-bit cores with SIMD units. AMD has Zen and... Zen.

AMD had to innovate because the court prohibited them from continuing to use illegal reverse-engineering techniques to clone Intel designs. Their 'innovation' consisted of purchasing designs elsewhere (e.g. NexGen).

3DNow! didn't start the whole extensions thing in x86. Have you ever heard of x87 or MMX?
 
But what if Nvidia needs the engineers to do this and the best ones work for ARM? Or better yet, develop the architecture and license it out through ARM. There are so many directions Nvidia can go with purchasing ARM, and they all can be profitable.

Get the engineers then, as everyone else does: Apple, Nuvia, Ampere... You don't need to purchase the whole ARM company.

In what way would Nvidia develop the architecture? What are those many directions?

I'd love some Nvidia x86 CPUs, but I reckon the company doesn't see x86 as part of the future computing landscape (supercomputing clusters in the magical cloud, serving thin and mobile clients).

Sure

nvidia_arm_versus_x86_shipments.jpg

https://www.theregister.com/2013/06/18/nvidia_cuda_arm_openacc/

It's a new ISA. You can't just magically declare MOAR. You have to work out how all the instructions and operating modes coexist. I have the books they released back then, and it's five volumes. It was so thorough that ARM basically used the same model in their hybrid 64-bit/32-bit ISA.

x86-64 is a mere extension of x86-32. You cannot build a 64-bit x86 core without supporting all the legacy x86 stuff back to 16-bit real mode. And this is one reason why x86 cores are so expensive, big, and inefficient.

ARM64 is a new ISA, and you can build a 64-bit ARM core without supporting any of the legacy 32-bit and 16-bit ARM stuff. Mobile chips still have to support 32-bit for legacy applications, but server/HPC cores do not. E.g. the ThunderX2 only supports the AArch64 execution state; there is no AArch32 support.
 
We just selected our new standard CPU for our virtual machine PODs.
The choice was Intel (we don't want to run a mixed AMD/Intel environment; it's too much of a hassle, and we have seen AMD rise and fall too many times).
That means that for as long as Intel keeps making the CPU in question, we will be using it.
When that CPU goes EOL, we will look for a new standard CPU.

It might be a choice between Intel and AMD again... ARM... not a chance.
A lot of people posting here seem to know only the consumer market and stuff they can read on Wiki, which makes their posts funny to read.
 
We just selected our new standard CPU for our virtual machine PODs.
The choice was Intel (we don't want to run a mixed AMD/Intel environment; it's too much of a hassle, and we have seen AMD rise and fall too many times).
That means that for as long as Intel keeps making the CPU in question, we will be using it.
When that CPU goes EOL, we will look for a new standard CPU.

It might be a choice between Intel and AMD again... ARM... not a chance.
A lot of people posting here seem to know only the consumer market and stuff they can read on Wiki, which makes their posts funny to read.

Intel is not a choice for many customers. Some have migrated to AMD, others to ARM.
 
Intel is not a choice for many customers. Some have migrated to AMD, others to ARM.

I would say that is a minority, not a majority.
I spoke to a few former co-workers at other companies; none of them are going AMD yet, and they laughed at the notion of ARM.
Enterprise is not the consumer space; people need to learn the difference.
 
When I say x86 is going to die, I'm not exempting AMD. They are going to be in the same boat; x86 is just losing relevance.
The AMD DX2-66, Thunderbird, and K6-III: just a few that had stints.
Depending on how many hairs you split, 486s were still Intel copies, and the K6-III was still a poor man's Pentium / Pentium II. I had both...
The coolest thing is to see IBM in the middle of all those transitions.
I look at IBM and feel a bit sad; they still produce their Power ISA for big iron, and it isn't bad, but it feels like IBM simply stepped into a ring they were ill-prepared to endure. Part of that is the feeling that we'll likely never see a return of the Power ISA to personal computing. My bet is that it could be done, but it simply wouldn't make sense to do in place of ARM or MIPS or RISC-V and so on.
Are you sure you don't want to switch your statement here?
Nope, that's accurate.
I think they optimize their technology to the greatest common denominator. Some bulk some custom. As apple moves to higher end chips they are actually following the roads paved by nVidia, AMD, and others.
Apple is riding the efficiency curve; neither AMD (yet) nor Nvidia have to do so to the level that Apple must. Battery life is king for their customers.

Conversely, GPUs don't clock much higher than 2GHz, and AMDs CPUs are stuck below 4.5GHz.
The Athlon 64/X2 destroyed every variant of Netburst in nearly all real-world usage, and it wasn't until the Core, and primarily Core 2 (Conroe), CPUs that Intel became competitive again, and stayed on top until now.
Core came from the P6 line :)
 
The best AMD motherboards come with Intel networking.
Yeah, and I'm using an Intel network adapter in my AMD machine, but that's nothing special. It was either Intel or Atheros, with Intel being the cheapest option. That's not innovation; Intel just makes really good network adapters. There was a period when Nvidia made the best NIC, but they stopped making motherboard chipsets because Intel didn't want them to.
The Core 2, which used an older architecture as a base, was faster than the competing AMD parts -- despite not having the memory controller on die.
The Athlon 64 was released in 2003, while the Core 2 Duo was released in 2006. While the Core 2 Duo was faster, that has a lot to do with AMD using the Athlon 64 architecture for too many years, much like Intel tweaking Sandy Bridge to this day. The Phenom and Phenom II were just Athlon 64s with tweaks and more cores.

AMD doubled the register size to 64bit, doubled the number of registers, and that was that. What's special is that Microsoft supported AMD64; Intel could have extended x86 to 64bit at any time.
Intel abandoned the idea of x86 going 64-bit, and that's why they created IA64. AMD's x64 implementation was so good that Intel had no choice but to abandon IA64 for x64, also known as AMD64.
Intel has had half a dozen upgrades in the time that AMD had literally none ;)
You do know people were able to get some CPUs working on chipsets they had no business working on, right? Half a dozen upgrades sure sounds like a great way to punish people for wanting to upgrade the CPU.
 
That's not innovative just that Intel makes really good network adapters.
Intel is still innovating with ethernet adapters. See their 400Gbit efforts. At the same time, their adapters for 'regular' computers have the best performance and the best driver support.
The Athlon 64 was released in 2003, while the Core 2 Duo was released in 2006. While the Core 2 Duo was faster, that has a lot to do with AMD using the Athlon 64 architecture for too many years, much like Intel tweaking Sandy Bridge to this day. The Phenom and Phenom II were just Athlon 64s with tweaks and more cores.
The point is that the follow-on to the Pentium III architecture was Tualatin, which was widely sought after and pretty much unavailable. It was also faster than the Athlons at the time. Later, it evolved into the Core architecture, where it was again faster than the Athlons (Phenoms) of the time.
Intel abandoned the idea of x86 going 64-bit, and that's why they created IA64. AMD's x64 implementation was so good that Intel had no choice but to abandon IA64 for x64, also known as AMD64.
I don't see AMD's x86-64 extension as really any better or worse than what Intel would have done had they chosen to extend x86 instead of doing IA64. That's the crux of a lot of the perspectives presented here: what really pushed Intel to move to x86-64 was Microsoft eventually adopting it. And as much as it's claimed to be some great innovation, Intel was able to tweak their existing Netburst architecture to support x86-64. That's how little difference there is; to AMD's credit, their innovation was pushing x86 to 64-bit while changing so little that uptake was straightforward. But again, it was Microsoft's decision to support their extension that made the difference.
You do know people were able to get some CPUs working on chipsets they had no business working on, right? Half a dozen upgrades sure sounds like a great way to punish people for wanting to upgrade the CPU.
There's working, then there's stable, and then there's stable across a broad cross-section of the product stack. A few got stuff working, a couple got it stable, but in general these were exceptions that wouldn't hold up to a broader implementation.
 
(snip arm infatuation)

x86-64 is a mere extension of x86-32. You cannot build a 64-bit x86 core without supporting all the legacy x86 stuff back to 16-bit real mode. And this is one reason why x86 cores are so expensive, big, and inefficient.

Not even close. If you really understood how processors work, you would realize it's a whole lot more than just an extension. I know you guys like to try to dumb it down, but if you really took the time to see how innovative it was, you wouldn't be spouting this nonsense. (see below)

ARM64 is a new ISA, and you can build a 64-bit ARM core without supporting any of the legacy 32-bit and 16-bit ARM stuff. Mobile chips still have to support 32-bit for legacy applications, but server/HPC cores do not. E.g. the ThunderX2 only supports the AArch64 execution state; there is no AArch32 support.

As with AMD64, the key feature you gloss over is that AArch32 and AArch64 can coexist on the same processor at the same time (not just at boot time). The physical hardware can be mapped between both spaces. Who'd a *THUNKed it.

As for your claim that the ThunderX2 only supports AArch64: that's an implementation choice. You can fuse off or restrict whatever you want. The ISA still allows backwards compatibility with A32 code, and a 64-bit hypervisor controls everything.

To Intel's credit, they did make virtual 8086 mode, but this was more akin to a virtual machine than a shared execution space and instruction set. The hypervisor simply provides the functions of disk, network, display, etc.

*Thunking is the method 64-bit drivers use to map to 32-bit space.
 
Nope, that's accurate.

Switching back to reality

Apple is riding the efficiency curve; neither AMD (yet) nor Nvidia have to do so to the level that Apple must. Battery life is king for their customers.

The point was Apple is still designing for low power. I look forward to their scaling things up.

Conversely, GPUs don't clock much higher than 2GHz, and AMDs CPUs are stuck below 4.5GHz.

And Apple can barely reach 3GHz. This has very little to do with the fab and more to do with choices of cache and pipeline. The A1x has an L2 with 1- or 2-cycle latency. When you see chips in the 3-4GHz range, this will probably have to change.
 
Not even close. If you really understood how processors work you would realize it's a whole lot more than just an extension. I know you guys like to try and dumb it down but if you really took the time to see how innovative it was, you wouldn't be spouting this nonsense. (see below)

Interesting, because the AMD engineers that designed x86-64 refer to their own creation as a "straightforward extension for 64 bits".

As with AMD64 the key feature you gloss over is that AArch32 and AArch64 can coexist on the same processor at the same time. (not just at boot up time) The physical hardware can be mapped between both spaces. Who'd a *THUNKed it.

As for you espousing that ThunderX2 only supports AArch64, that's an implementation. You can fuse off or restrict what you want. The ISA still allows for backwards compatibility with A32 code and a 64bit hyper-visor controls everything.

Of course ARMv8 is backward compatible with ARMv7, but my point was different: ARM64 is a new ISA, separate from ARM32/16. Unlike in the x86 world, ARM32/16 isn't a subset of ARM64.

Thanks to this separation of the new ISA from the former ISAs, engineers have the option of implementing one or more ISAs in the same core. Mobile ARM chips currently implement both ARM64 and ARM32/16 for legacy software reasons (future mobile cores could implement only 64-bit and drop 32/16 support as software evolves). ARM server chips do not have a legacy software problem, because ARM servers are new. So server cores such as the TX family can implement only the 64-bit ISA, reducing cost, complexity, power, and area:

"We have no x86 legacy, like 32-bit support and things like that,” said Hegde. “We are able to optimize our code, and our core area is significantly smaller [as a result]. Just to give you an idea, in the previous generation, if you look at ThunderX2, compared to AMD or Skylake, for the same process node technology [we get] roughly 20% to 25% smaller die area. That translates into lower power. When we move to 7nm with ThunderX3, our core compared to AMD Rome’s 7nm is roughly 30% smaller.”

https://www.hpcwire.com/2020/03/17/marvell-talks-up-thunderx3-and-arm-server-roadmap/
 
Can Nvidia even afford arm? Or even enter a bidding war for it? Sounds really dubious to me, unless if Nvidia was part of (or leading) a coalition to acquire arm.

Well looks like this is gonna cost an Arm & a Leg for nVidia

cc erek Schro

https://www.thefpsreview.com/2020/07/31/nvidia-is-now-in-advanced-talks-to-buy-arm/

If NVIDIA does follow through (and the regulators allow the company to get away with it), the price of the deal should be quite mind-blowing. According to New Street Research LLP, Arm is currently worth $44 billion, a figure that's expected to rise by $24 billion (to $68 billion) by 2025.

“A deal for Arm could be the largest ever in the semiconductor industry, which has been consolidating in recent years as companies seek to diversify and add scale,”
~ Bloomberg has followed up on its initial report with a new article that suggests green team is seriously considering an acquisition. In fact, a deal could be announced very soon
 
Well looks like this is gonna cost an Arm & a Leg for nVidia

cc erek Schro

https://www.thefpsreview.com/2020/07/31/nvidia-is-now-in-advanced-talks-to-buy-arm/

If NVIDIA does follow through (and the regulators allow the company to get away with it), the price of the deal should be quite mind-blowing. According to New Street Research LLP, Arm is currently worth $44 billion, a figure that's expected to rise by $24 billion (to $68 billion) by 2025.


~ Bloomberg has followed up on its initial report with a new article that suggests green team is seriously considering an acquisition. In fact, a deal could be announced very soon

Thanks! was looking to post that update, but got sucked back into work
 
Jensen Huang:
"The state of ARM and its licensing model now is a bit like putting out a bowl of candy on Halloween with a sign saying 'Help yourself, but please take only one!' You know the first couple of kids will just take all the candy. Well, I think there's a better way to pass out the candy, because, let's face it, kids are scum."

 
Interesting, because the AMD engineers that designed x86-64 refer to their own creation as a "straightforward extension for 64 bits".

Being straightforward does not invalidate the underlying complexity.

It's still more of a change than AArch32 to AArch64, and probably the most complex change made to any ISA to date.

Of course ARMv8 is backward compatible with ARMv7, but my point was different: ARM64 is a new ISA, separate from ARM32/16. Unlike in the x86 world, ARM32/16 isn't a subset of ARM64.

Thanks to this separation of the new ISA from the former ISAs, engineers have the option of implementing one or more ISAs in the same core. Mobile ARM chips currently implement both ARM64 and ARM32/16 for legacy software reasons (future mobile cores could implement only 64-bit and drop 32/16 support as software evolves). ARM server chips do not have a legacy software problem, because ARM servers are new. So server cores such as the TX family can implement only the 64-bit ISA, reducing cost, complexity, power, and area:
"We have no x86 legacy, like 32-bit support and things like that,” said Hegde. “We are able to optimize our code, and our core area is significantly smaller [as a result]. Just to give you an idea, in the previous generation, if you look at ThunderX2, compared to AMD or Skylake, for the same process node technology [we get] roughly 20% to 25% smaller die area. That translates into lower power. When we move to 7nm with ThunderX3, our core compared to AMD Rome’s 7nm is roughly 30% smaller.”

https://www.hpcwire.com/2020/03/17/marvell-talks-up-thunderx3-and-arm-server-roadmap/

Funny how they compare the delta from x86 to ThunderX and not the change from a standard ARM64 core to ThunderX, because I doubt that delta would be much.

By definition RISC is going to be much smaller. Without going full-on into a RISC-vs-CISC debate, the abstraction of memory access from the operation makes instructions much simpler. The flip side is that you may need more instructions to get the same work done, although there are cases in loops where the reverse is true.

In theory there should be some cases where 32-bit is faster than 64-bit (e.g. recursion), but in practice, at least on x86, that has generally not been the case.

As for the area needed by the legacy modes, I highly doubt their removal would amount to much. You still have to be able to operate on byte, word, dword, qword, etc., and an extended register is backwards compatible with the smaller register's address space as long as you have methods to extend it. Having more than one virtual addressing mode most certainly adds a bit, but I doubt it's much.

If you really want to make this point, show us a large delta between two ARM dies, one pure 64-bit and one not. Then this may carry some weight.
 
Well looks like this is gonna cost an Arm & a Leg for nVidia

cc erek Schro

https://www.thefpsreview.com/2020/07/31/nvidia-is-now-in-advanced-talks-to-buy-arm/

If NVIDIA does follow through (and the regulators allow the company to get away with it), the price of the deal should be quite mind-blowing. According to New Street Research LLP, Arm is currently worth $44 billion, a figure that's expected to rise by $24 billion (to $68 billion) by 2025.


~ Bloomberg has followed up on its initial report with a new article that suggests green team is seriously considering an acquisition. In fact, a deal could be announced very soon

As nice as that would be to see, I don't think Nvidia will get ARM. SoftBank is definitely ratcheting up interest before they take ARM public.
 
x86-64 is the reason we are stuck with so much legacy crap on x86.
IA64 would have gotten rid of a lot of legacy.
What are virtual machine pods?

A rack filled with physical hosts, switches, and SAN for running virtual machine clusters.
 
A rack filled with physical hosts, switches, and SAN for running virtual machine clusters.

Interesting. Is this something proprietary? Would you happen to have any links that describe this tech? Genuinely curious
 