AI Accelerator Cards in Desktops?

psy81

Based on recent developments and Microsoft's move to integrate AI into Windows, it looks like we may have AI accelerator cards in our PCs in the near future. Or not; who knows. Maybe Nvidia will just buy up these startups and integrate the technology into their GPUs.

What do you guys think of this? Personally, this doesn't excite me, but who knows what it may bring.

I would be annoyed if Microsoft required these cards to run Windows.


https://youtu.be/q0l7eaK-4po?si=QHdBKS3RdQ6rw4PK

https://www.pcworld.com/article/219...or-cards-from-memryx-kinara-debut-at-ces.html
 
and these.....



https://www.amazon.com/youyeetoo-Ac...=1705633445&sprefix=coral+tpu,aps,147&sr=8-15
 
Yeah, unless they force it on me by integrating it into the motherboard or CPU, I'll pass.

I want none of this shit.

I've made it to 2024 without using anything AI, I plan on making it until my deathbed without ever doing so.

Fuck all things AI. Fuck it long. Fuck it hard.
 
Will these just become obsolete with the "AI" stuff Intel and AMD are putting into their CPUs?
 
Already exists in production products below...

https://coral.ai/products/#prototyping-products

Those are not really for the typical x86 computers we all use, but fair enough.

Yeah, unless they force it on me by integrating it into the motherboard or CPU, I'll pass.

I want none of this shit.

I've made it to 2024 without using anything AI, I plan on making it until my deathbed without ever doing so.

Fuck all things AI. Fuck it long. Fuck it hard.

Yeah, too late. Pandora's box on that one opened years ago, and closing it is never going to happen. You're gonna have to go luddite to avoid it now.
 
My impression is that the standalone accelerators are more datacenter / development products.

Desktop PCs will be going the way of integrated "AI" accelerators but it's gonna be integrated into other hardware.
RTX 2000+ already have Tensor cores, RX 7000 have Matrix cores, and Arc has XMX cores- those already fulfill any "AI" acceleration requirements.

For systems without dGPU, Ryzen 8000 has some sort of AI accelerators (as will Ryzen 9000 presumably) and I would guess that future Intel CPUs with Arc iGPU will start including XMX cores.

So AI accelerators on desktop = yes, but they're either already included in current hardware or will likely be added soon.
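
If you want to see what software actually does with whichever of those units is present, here is a minimal sketch using ONNX Runtime's Python API, one common route Windows apps take to reach GPUs/NPUs via DirectML. The provider strings are ONNX Runtime's own; the "model.onnx" path is just a hypothetical placeholder.

```python
# Minimal sketch: ask ONNX Runtime which acceleration backends it can see on
# this machine, then build an inference session that prefers them over the CPU.
# Requires the `onnxruntime` package; "model.onnx" is a hypothetical placeholder.
import os
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available execution providers:", providers)
# Typical results: ['CPUExecutionProvider'] on a plain install, plus
# 'CUDAExecutionProvider' or 'DmlExecutionProvider' (DirectML, which can target
# dGPUs/iGPUs and, on supported builds, NPUs) when matching hardware and an
# appropriate onnxruntime build are present.

# Prefer hardware providers if available, always keeping the CPU as a fallback.
preferred = [p for p in ("CUDAExecutionProvider", "DmlExecutionProvider")
             if p in providers]

if os.path.exists("model.onnx"):
    session = ort.InferenceSession(
        "model.onnx", providers=preferred + ["CPUExecutionProvider"])
    print("Session is running on:", session.get_providers())
```

The point being: applications just ask the runtime what acceleration exists and fall back to the CPU, so a dedicated add-in card is one more provider in that list, not a requirement.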
 
Yeah, unless they force it on me by integrating it into the motherboard or CPU, I'll pass.

I want none of this shit.

I've made it to 2024 without using anything AI, I plan on making it until my deathbed without ever doing so.

Fuck all things AI. Fuck it long. Fuck it hard.
Gonna be an issue when some college kid who coasted through school on AI comes in and rocks up with the best AI answers.
 
Gonna be an issue when some college kid who coasted through school on AI comes in and rocks up with the best AI answers.
...until they are given the job, and 5 years later buildings and bridges start collapsing and planes start falling out of the sky.

The problem with AI is misplaced trust. It will get the little things right most of the time, lulling people into a false sense of confidence, and over time reviews of its output will become more and more cursory as that misplaced trust builds up.

Just like Teslas broadsiding 18-wheelers in broad daylight because people trusted them too much and started taking naps and reading the paper while "driving", AI-induced failures are almost guaranteed.

You simply can't trust that shit.
 
AI integration has been a common part of what makes 90% of mobile platforms what they are. iOS and Android would be terrible to use without it; it's been upscaling and enhancing pictures and video playback there for 10 years or more. Autocorrect, predictive text, audio filtering for input and output, spatial audio, signal noise reduction for WiFi and cellular. It's a long list. You can get into some fun semantics about which of it is machine learning and which is AI, but whatever you want to call them, those accelerators have been in consumer goods for 10 years or more, and the fact that it's only now becoming a mainstay on Windows shows how much Microsoft and Intel dropped the ball on this.
 
...until they are given the job, and 5 years later buildings and bridges start collapsing and planes start falling out of the sky.

The problem with AI is misplaced trust. It will get the little things right most of the time, lulling people into a false sense of confidence, and over time reviews of its output will become more and more cursory as that misplaced trust builds up.

Just like Teslas broadsiding 18-wheelers in broad daylight because people trusted them too much and started taking naps and reading the paper while "driving", AI-induced failures are almost guaranteed.

You simply can't trust that shit.
What’s that saying about familiarity and complacency?
 
For systems without dGPU, Ryzen 8000 has some sort of AI accelerators (as will Ryzen 9000 presumably) and I would guess that future Intel CPUs with Arc iGPU will start including XMX cores.
They will have integrated Neural Processing Units (NPU) in the CPUs and SoCs.

[Image: AMD Ryzen AI roadmap]


[Image: AMD Ryzen 8000 "Hawk Point" APUs]


So AI accelerators on desktop = yes, but they're either already included in current hardware or will likely be added soon.
Ironically enough, NPUs have been integrated into smartphones since around 2017, starting with chips like Apple's A11 Bionic (the first Neural Engine) and Huawei's Kirin 970.
This technology is old hat for smartphone platforms and software stacks, but for desktops and laptops it will have its benefits for doing searches, Teams/Zoom meeting filters, etc.

Yeah, unless they force it on me by integrating it into the motherboard or CPU, I'll pass.
While I can understand your sentiment there are far too many profits to be made from AI, and it has opened so many doors for individuals, corporations, governments, etc.
For example, this is just with AMD's NPU offerings:

[Image: AMD Ryzen AI]


However, there is a dark side of AI (see video below) - I won't go into politics, CBDC, user tracking, sentient AI, etc. - that's for a different thread. :p


https://www.youtube.com/watch?v=lkMRhCyazqM

Enjoying the dark cyberpunk future yet? :borg:🤖
 
AI integration has been a common part of what makes 90% of mobile platforms what they are. iOS and Android would be terrible to use without it; it's been upscaling and enhancing pictures and video playback there for 10 years or more. Autocorrect, predictive text, audio filtering for input and output, spatial audio, signal noise reduction for WiFi and cellular. It's a long list. You can get into some fun semantics about which of it is machine learning and which is AI, but whatever you want to call them, those accelerators have been in consumer goods for 10 years or more, and the fact that it's only now becoming a mainstay on Windows shows how much Microsoft and Intel dropped the ball on this.


Don't want any of that shit. My phones worked just fine in the oughts.

As the saying back then went, "I want my phone to be more like my computer, not my computer to be more like my phone."

Absolutely NO automation, unless I create, initiate and set it up myself. Think cron + custom script. Nothing pre-existing, pre-set up or defaulting to "on".

And no cloud ever.
 
Will these just become obsolete with the "AI" stuff Intel and AMD are putting into their CPUs?

This probably wouldn't be for your average consumer. At least not yet.

...until they are given the job, and 5 years later buildings and bridges start collapsing and planes start falling out of the sky.

The problem with AI is misplaced trust. It will get the little things right most of the time, lulling people into a false sense of confidence, and over time reviews of its output will become more and more cursory as that misplaced trust builds up.

Just like Teslas broadsiding 18-wheelers in broad daylight because people trusted them too much and started taking naps and reading the paper while "driving", AI-induced failures are almost guaranteed.

You simply can't trust that shit.

These are both kind of shit examples because you're literally just coming up with contrived hypotheticals where we just completely and utterly throw all logic and safety out the window and blindly trust anything and everything despite the array of warnings.

There is no universe where something like a bridge would suddenly stop being subject to excessive validation. You don't just go "oh an AI generated the design" and throw literally every other step and safety precaution immediately out the window. You're just skipping years and years and years of development to jump to the alarmist ending.

And for self-driving, it isn't Tesla's fault if you did exactly the thing you were explicitly warned not to do while trying to live out your I, Robot fantasy, and woke up to screaming because the car went through a crosswalk and bisected a family dog that was on a leash ten feet behind its owner.

Like you obviously can't fucking sleep in a car you are driving. There is no valid defense. If you do this you are a fucking moron and need your license revoked permanently and more. You caused the accident after making the conscious choice to abuse something.
 
This probably wouldn't be for your average consumer. At least not yet.
100% true, but for a different reason than you're thinking.

Consumers have been using the latest and greatest in consumer-facing AI since 2014; it's been a staple of what has made iOS and Android what they are.
Frankly, the average consumer doesn't really use a PC; they work with them, but they don't use them. They do something specific on them, then go back to their phone/tablet/console.
Intel and Microsoft are getting torn up because it's now bad enough that an iPad is a better word processor, spreadsheet tool, and teleconferencing device than most business laptops out there, and Microsoft, with the support of Intel and AMD, needs to step up its game.

This is one of the first steps of many for Microsoft to stay relevant in the consumer market. They are losing the feature and quality race to machines that are a fraction of the cost and far more reliable.
 
The big problem, as I see it, is whether anyone is smart enough to argue with an AI. Example: maths proofs are being done with AI, but can you, as a mere human, follow the internal logic of the AI's argument? Fujitsu's Horizon system, created by human minds, managed to wrongly say that hundreds of self-employed British postmasters were stealing from the till. Lives destroyed, many jailed, suicides, etc. It's going to be like a sci-fi novel in 20 years, I reckon: the IT Amish (rebels) versus the AI Elite (empire).
 
The big problem, as I see it, is whether anyone is smart enough to argue with an AI. Example: maths proofs are being done with AI, but can you, as a mere human, follow the internal logic of the AI's argument? Fujitsu's Horizon system, created by human minds, managed to wrongly say that hundreds of self-employed British postmasters were stealing from the till. Lives destroyed, many jailed, suicides, etc. It's going to be like a sci-fi novel in 20 years, I reckon: the IT Amish (rebels) versus the AI Elite (empire).
To be fair that faulty Fujitsu Horizon system was programmed in the late 1990s, way before AI as we know it today was a thing.
However, the flaws with AI will become apparent sooner than later, but it won't be in simple programming errors... :borg:

Elysium (2013):

https://www.youtube.com/watch?v=flLoSxd2nNY
 
Luddites are quite amusing. Of course, I still have fantasies of living on an old style ranch and performing manual labor, plowin' mah field, milkin' mah goats, and just living an honest life using nothing but the most advanced Amish technology.

On the other hand, if I am going to be using a computer at all, I might as well use whatever promises me the most powerful computational abilities, including graphics, AI, updated software and hardware, and go all out on RAM, CPU, GPU, and AI chip. Why would I be such a selective luddite and fully embrace modern technology, but suddenly "OH NO, I love my modern life but _________ is going too far!" as I type this on a rather advanced and nifty laptop or always-connected phone. What a pathetically small-minded opinion.

Either go all or nothing. Let's not play silly little games where all of a sudden science has gone too far, just starting here in 2024, rather than in 2011, or 1999, or even the dawn of the industrial era. Let's just keep walking forward into the future bravely, and focus carefully on using this technology wisely, rather than fear-mongering and practicing advanced hypocrisy.
 
Once all the sleaze bag grifting calms down machine learning will be an incredibly powerful tool for people.

In the meantime grifters will use AI to make almost everything a little worse than it was before.
 
Luddites are quite amusing. Of course, I still have fantasies of living on an old style ranch and performing manual labor, plowin' mah field, milkin' mah goats, and just living an honest life using nothing but the most advanced Amish technology.

On the other hand, if I am going to be using a computer at all, I might as well use whatever promises me the most powerful computational abilities, including graphics, AI, updated software and hardware, and go all out on RAM, CPU, GPU, and AI chip. Why would I be such a selective luddite and fully embrace modern technology, but suddenly "OH NO, I love my modern life but _________ is going too far!" as I type this on a rather advanced and nifty laptop or always-connected phone. What a pathetically small-minded opinion.

Either go all or nothing. Let's not play silly little games where all of a sudden science has gone too far, just starting here in 2024, rather than in 2011, or 1999, or even the dawn of the industrial era. Let's just keep walking forward into the future bravely, and focus carefully on using this technology wisely, rather than fear-mongering and practicing advanced hypocrisy.
At least they're self-sufficient. Why is it "small minded"? If the latest technology being hyped up isn't of any use to someone, there's no reason for them to go out of their way to obtain or use it.
 
What do you guys think of this?
I suspect the "winning" combo will be cpu-gpu and cloud AI.

For stuff needed for rendering, it will occur on the GPU; for everything else it will be a mix of CPU (they will all have some form of AI accelerator on board) and GPU, plus the cloud if you are OK with some latency.

I've made it to 2024 without using anything AI,
Except you have, if you have ever used a search engine like Google, played a video game, or used thousands of other applications. I am not even sure what that sentence could mean for a modern internet user. Never used speech-to-text in your life (by choice on your phone, or because it was the only option on a call), a Google Maps-style directions service, word auto-complete, or recommendations (Amazon, YouTube, etc.)? I think you underestimate how prevalent AI algorithms and machine learning have been; the 2017 transformer revolution made it possible for models to get much bigger, but ML was a really common affair before that.
 
These are both kind of shit examples because you're literally just coming up with contrived hypotheticals where we just completely and utterly throw all logic and safety out the window and blindly trust anything and everything despite the array of warnings.

But that's what people do.

To be completely honest, it isn't AI that is the problem. It can be an incredibly useful tool.

It's the people that are the problem, in how they trust and interpret the output of the AI models.

I don't trust our AI future, not because I don't trust AI. AI is just a bunch of nested statistics. I don't trust AI because I don't trust people.
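
For anyone who hasn't looked under the hood, that "nested statistics" line is close to literal. Here is a toy sketch (NumPy only, random made-up weights, not any real model) of what a small network's "answer" boils down to: weighted sums pushed through simple functions until a probability falls out.

```python
# Toy illustration of the "nested statistics" point: a tiny 2-layer network
# is just weighted sums pushed through nonlinearities. Weights are random
# placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # some input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

h = np.maximum(0, W1 @ x + b1)    # layer 1: linear combination + ReLU
logit = W2 @ h + b2               # layer 2: another linear combination
prob = 1 / (1 + np.exp(-logit))   # sigmoid turns it into a "confidence"

print(f"'Confidence' the model reports: {prob[0]:.3f}")
# There is no understanding in here, just arithmetic on learned coefficients.
```

There is nothing in that arithmetic to trust or distrust; the trust question lives entirely with the people who trained it and the people acting on the number.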
 
To be completely honest, it isn't AI that is the problem.
It could make things worse or better, but it is certainly not AI that is the problem; it is not like humans did not trust old-fashioned computer output in the past.

There is currently a giant scandal about the UK government being sued by a long list of Post Office workers (1999 to 2015) who were declared guilty of fraud because what the computers said was believed over their word:
https://en.wikipedia.org/wiki/British_Post_Office_scandal

If anything, the many forms of AI that just give people the most likely kind of answer will be taken as fact and trusted far less than current deterministic program output is. Hard to know, but one would bet that if it had been a modern black-box AI telling us they were committing fraud, instead of a faulty Excel-like accounting system, we would have trusted it way less, not more.

And for most AI usage, AI does not need to be that good, just better or cheaper than humans. Say, does this Lay's chip look good enough to end up in the bag, or should it be discarded? Letting an AI that was trained on a lot of good-looking and bad-looking chips decide that, and just checking from time to time that things are going well, is not a big deal. That will be a giant amount of the past, current, and future use of AI.
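
A rough sketch of that chip-sorting pattern, for the curious: scikit-learn on fabricated data, with made-up 0.90/0.10 thresholds purely for illustration. The model makes the confident calls automatically and the borderline items get routed to a human spot check.

```python
# Rough sketch of the "good chip / bad chip" pattern described above:
# a simple classifier makes the easy calls, low-confidence items go to a human.
# Data and thresholds are fabricated for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])

probs = model.predict_proba(X[400:])[:, 1]   # P(item is good) for new items
accept = probs > 0.90                        # confidently good -> into the bag
reject = probs < 0.10                        # confidently bad -> discard
to_human = ~(accept | reject)                # everything else -> spot check

print(f"auto-accepted: {accept.sum()}, auto-rejected: {reject.sum()}, "
      f"flagged for human review: {to_human.sum()}")
```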
 
But that's what people do.

To be completely honest, it isn't AI that is the problem. It can be an incredibly useful tool.

It's the people that are the problem, in how they trust and interpret the output of the AI models.

I don't trust our AI future, not because I don't trust AI. AI is just a bunch of nested statistics. I don't trust AI because I don't trust people.
eh, there will always be dumb people, it's a bell curve. Our connectivity makes us more aware of the mistakes of others, whereas before we had to watch America's stupidest videos instead of getting the constant bombardment.

Cameras used to take people's souls, medicine was witchcraft, Y2K was the end of computers... we will keep moving on. We shouldn't be scared of AI yet anyways, not until it is real AI. Making nude actress photos or the president's Discord server might not end our species yet lol.
 
eh, there will always be dumb people, it's a bell curve.

Indeed it is, but the people I would trust with AI output are the exception, not the norm. Maybe 0.1% of all people? Even most with technical degrees wouldn't fall into that category for me.

AI should have remained a behind the scenes tool for the highly competent multiple advanced degrees type of people, not something pushed to the masses. Never underestimate how dumb the masses are.
 
Indeed it is, but the people I would trust with AI output are the exception, not the norm. Maybe 0.1% of all people? Even most with technical degrees wouldn't fall into that category for me.

AI should have remained a behind the scenes tool for the highly competent multiple advanced degrees type of people, not something pushed to the masses. Never underestimate how dumb the masses are.
I do feel it became too common too quickly. People barely understand how programs work, or even that websites are using AI. I use it here and there for search results or to fix coding issues. It's not the ultimate power some seem to think it is. I use it like I would Google, to point me in a direction to read about something.
 
I would trust with AI output are the exception, not the norm. Maybe 0.1% of all people? Even most with technical degrees wouldn't fall into that category for me.
How often would the AI output require any trust? How often would just looking at the result not tell you for yourself whether it works well enough?

If someone uses AI to remove background noise from your phone call... with any generative AI you can see the actual result; no trust is needed for a video game asset, and text-to-speech can be tested.

I would also say it is not about people being dumb or not at all; a company making or using an AI junk-email filter that turns out to have unexpected side effects will have nothing to do with them being dumb or not.

It's not the ultimate power some seem to think it is. I use it like I would Google, to point me in a direction to read about something.
I think that is ultra common, and it puts a bit of a brake on the "too common, too quickly" concern, in terms of regular people using those tools directly day to day (outside of students and developers; I am not sure how common it is).
 
How often would the AI output require any trust? How often would just looking at the result not tell you for yourself whether it works well enough?

I would argue that it should never be used for any purpose with real implications without 100% human verification of the outputs. I mean, sure, trivial nonsense like photoshop filters, who gives a shit, but for stuff that matters? Yeah, going to need intense validation with complete traceable requirements documentation, detailed architecture design, and understanding of not just what the outputs are, but also how the system arrived at them. Black boxes are never ok.

The validation of these nonsense black box models can never be strong enough.

This is - however - coming from someone with an FDA GPSV (General Principles of Software Validation) perspective, which may be a level of burden of proof above what most people consider reasonable.
 
Black boxes are never ok.
What should we do with all the knowledge that AlphaFold has given to humanity? If ML weather predictions beat, day after day, the supercomputer running a well-understood simulation, should we just discard them?

What if the black box used to do speech-to-text seems to give good results?
 
What should we do with all the knowledge that AlphaFold has given to humanity? If ML weather predictions beat, day after day, the supercomputer running a well-understood simulation, should we just discard them?

What if the black box used to do speech-to-text seems to give good results?

Use the black box model to gather inputs that may have potential, but consider those inputs akin to untested theories until static hypotheses of why they are true have been formulated and then tested, and they have either been found to be true or rejected through the traditional peer-reviewed scientific method.

Never, ever, never go straight from AI output to implementation without the above.
 
Use the black box model to gather inputs that may have potential, but consider those inputs akin to untested theories until static hypotheses of why they are true have been formulated and then tested, and they have either been found to be true or rejected through the traditional peer-reviewed scientific method.
Not just to gather input; in the end it is there to provide the output that gets used.

But yes, obviously every black-box model is validated against reality before its results are given credence, and often it provides output that will be tested before being used.

Never, ever, never go straight from AI output to implementation without the above.
Who would ever make an ML system without validating that it gives good output? I am not sure what you mean.

Let's take AlphaFold, the most famous and biggest success in recent ML history: do you have an issue with it or not?
 
Who would ever make an ML system without validating that it gives good output? I am not sure what you mean.

I'm not talking about automated validation the model does itself.

I'm talking that it must be done manually.

Subject matter experts must look at the ML output and formulate their own hypotheses, and then construct traditional static tests to confirm or deny those hypotheses.

The goal would be to make sure that nothing ever goes straight from model/automation and into implementation.

Let's take AlphaFold, the most famous and biggest success in recent ML history: do you have an issue with it or not?

I have zero familiarity with AlphaFold. First I've heard of those two words next to each other. I'll have to read about it.

A brief googling suggests all AlphaFold does is predict protein structures using AI. Whether or not I take issue with it depends on how the predicted protein structures are used.

If they - post computation in the model - are verified by manual experimentation, I could be OK with that, as long as the experimentation follows traditional testing rigor.
 
I'm talking that it must be done manually.
It is always done manually; when Google makes something like AlphaFold, they manually validate that the outputs are good.

When they made the weather-forecasting black box, the first thing they did in the following weeks was compare its predictions with the actual weather every day and see how good it was at it.
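
That check is about as unglamorous as it sounds. Something like the sketch below (NumPy, all numbers fabricated for illustration): score the black box's daily forecasts against what actually happened, next to the traditional simulation, and keep doing it.

```python
# Sketch of the validation loop described above: compare a black-box model's
# daily forecasts against observations, and against a known baseline
# (here, the traditional simulation). All numbers are fabricated placeholders.
import numpy as np

rng = np.random.default_rng(1)
observed = 20 + 5 * np.sin(np.linspace(0, 3, 30))       # actual daily temps
ml_forecast = observed + rng.normal(0, 0.8, size=30)    # black-box model
sim_forecast = observed + rng.normal(0, 1.5, size=30)   # physics simulation

def mae(pred, truth):
    """Mean absolute error: average size of the daily miss, in degrees."""
    return float(np.mean(np.abs(pred - truth)))

print(f"ML model MAE:   {mae(ml_forecast, observed):.2f}")
print(f"Simulation MAE: {mae(sim_forecast, observed):.2f}")
# If the black box consistently misses by less, that running comparison is the
# evidence you keep collecting before (and while) trusting its output.
```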

The goal would be to make sure that nothing ever goes straight from model/automation and into implementation.
Once the model is validated (the way a scale gets calibrated), some systems will use the output directly (when time is a factor, for example). Take a system like this:
https://www.agritechtomorrow.com/

There will be a lot of validation, and the farmer will manually check every pass for issues, but when the system judges whether a bug, mushroom, weed, etc. is harmful to the farm's goals or beneficial, in order to decide whether to burn it with a laser or not, it will not ask for human confirmation in every case. Do you have an issue with that? You can obviously be sure that, before it is run on real-world fields, trial runs to validate that it works OK will have been done, and that part of the training will have included human feedback.


A brief googling suggests all AlphaFold does is predict protein structures using AI. Whether or not I take issue with it depends on how the predicted protein structures are used.
To try to cure cancer, Alzheimer's, etc. ML/generative AI was also used to help make the mRNA COVID vaccine (that's why the actual vaccine we used was ready so ridiculously fast: https://bigthink.com/health/ai-mrna-vaccines-moderna/).
 
Luddites are quite amusing. Of course, I still have fantasies of living on an old style ranch and performing manual labor, plowin' mah field, milkin' mah goats, and just living an honest life using nothing but the most advanced Amish technology.
Many individuals still live this type of life; it's hardly a fantasy.

On the other hand, if I am going to be using a computer at all, I might as well use whatever promises me the most powerful computational abilities, including graphics, AI, updated software and hardware, and go all out on RAM, CPU, GPU, and AI chip. Why would I be such a selective luddite and fully embrace modern technology, but suddenly "OH NO, I love my modern life but _________ is going too far!" as I type this on a rather advanced and nifty laptop or always-connected phone. What a pathetically small-minded opinion.
I would agree with your sentiments as well... until you combine political and corporate agendas with said technologies.
The technology itself isn't what is worrying the "luddites", it's what comes next after everything has been implemented.

What China is now, how their AI is designed to control everyone via facial/body recognition, CBDC, and social credit scoring systems is what is going to happen to the rest of the world.
This isn't an if, it's a when that is going to be much sooner than later.

Either go all or nothing. Let's not play silly little games where all of a sudden science has gone too far, just starting here in 2024, rather than in 2011, or 1999, or even the dawn of the industrial era. Let's just keep walking forward into the future bravely, and focus carefully on using this technology wisely, rather than fear-mongering and practicing advanced hypocrisy.
It went too far in 2018 for China, and 2020 for the rest of the world.
Seeing how far AI has advanced in the last four years is utterly astounding, and dwarfs the advancements in all other technologies since.
 
Based on recent developments and Microsoft's move to integrate AI into Windows, it looks like we may have AI accelerator cards in our PCs in the near future. Or not; who knows. Maybe Nvidia will just buy up these startups and integrate the technology into their GPUs.

What do you guys think of this? Personally, this doesn't excite me, but who knows what it may bring.

I would be annoyed if Microsoft required these cards to run Windows.


https://youtu.be/q0l7eaK-4po?si=QHdBKS3RdQ6rw4PK

https://www.pcworld.com/article/219...or-cards-from-memryx-kinara-debut-at-ces.html

AMD is putting an accelerator block in their CPUs (and has already demoed it working with Microsoft's stuff), and maybe their GPUs eventually.
 
I would agree with your sentiments as well... until you combine political and corporate agendas with said technologies.
The technology itself isn't what is worrying the "luddites", it's what comes next after everything has been implemented.

What China is now, how their AI is designed to control everyone via facial/body recognition, CBDC, and social credit scoring systems is what is going to happen to the rest of the world.
This isn't an if, it's a when that is going to be much sooner than later.

That's an "over my dead body" proposition. I will literally die fighting the system before I ever submit to living under that kind of dystopia.

It went too far in 2018 for China, and 2020 for the rest of the world.
Seeing how far AI has advanced in the last four years is utterly astounding, and dwarfs the advancements in all other technologies since.

This is why it all needs to be set on fire. By any means necessary, this shit needs to be stopped. Absolutely nothing is off the table.
 