ZLUDA Allows CUDA Binaries to Run on AMD GPUs

GDI Lord

Limp Gawd
Joined
Jan 14, 2017
Messages
306
AMD ROCm Solution Enables Native Execution of NVIDIA CUDA Binaries on Radeon GPUs
Published One hour ago by Hilbert Hagedoorn

https://www.guru3d.com/story/amd-ro...ution-of-nvidia-cuda-binaries-on-radeon-gpus/

"AMD has introduced a solution using ROCm technology to enable the running of NVIDIA CUDA binaries on AMD graphics hardware without any modifications. This project, known as ZLUDA, was discreetly backed by AMD over two years. Initially aimed at making CUDA compatible with Intel graphics, ZLUDA was reconfigured by developer Andrzej Janik, who was hired by AMD in 2022, to also support AMD's Radeon GPUs through the HIP/ROCm framework. The project, which took two years to develop, allows for CUDA compatibility on AMD devices without the need to alter the original source code."

I'm pretty darn impressed, TBH. Not very impressed that both Intel and AMD dropped Andrzej, but impressed that it works - better than I expected.


Previously mentioned by erek in 2020: https://hardforum.com/threads/zluda-project-paves-the-way-for-cuda-on-intel.2004052/
 
Interesting.

I've always been 100% against proprietary standards, and for cross-compatibility and open standards, so this is a positive development as far as I am concerned, but I wonder if there will be a patent fight over this? I can't imagine Nvidia has left themselves wide open to something like this coming in and eating their lunch.

I mean, after all, Nvidia has adopted the old Intel "lawyer up, sue everyone and everything, take over the industry" approach; I can't imagine they wouldn't "cease and desist" this in 10 seconds flat.
 
https://github.com/vosen/ZLUDA

A 4-year-old project, but it was for Intel hardware and was abandoned for a while before AMD got involved with it.
https://thenewstack.io/zluda-a-cuda-for-intel-gpus-needs-a-new-maintainer/

Apparently coding languages are fair use and anyone can make a compiler for them. In the past I worked for a company that added Flash support, in a totally reverse-engineered way, on mobile platforms that did not have Flash support (pre-smartphone revolution), and it felt legal.

https://www.quora.com/Are-all-programming-languages-in-the-public-domain

Google did this for the Java API, for example:
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_Inc.


And AMD was actively involved in that project, making it sound on the legal side of things.
 
I was gonna post this myself but GDI Lord beat me to it. This is gonna hurt Nvidia's bottom line if this tool does allow developers to run CUDA on AMD hardware. I wonder how this will affect AI?

https://www.phoronix.com/review/radeon-cuda-zluda
Same. I like this statement:

"Andrzej Janik reached out and provided access to the new ZLUDA implementation for AMD ROCm to allow me to test it out and benchmark it in advance of today's planned public announcement. I've been testing it out for a few days and it's been a positive experience: CUDA-enabled software indeed running atop ROCm and without any changes. Even proprietary renderers and the like working with this "CUDA on Radeon" implementation."
 
This is gonna hurt Nvidia's bottom line if this tool does allow developers to run CUDA on AMD hardware.
A bit of a dangerous bet; there is probably a reason why AMD left the project (and Intel before them), and a reason why Nvidia made CUDA an API open enough, with open specs, that this is easy to do.

https://www.tomshardware.com/pc-components/gpus/software-allows-cuda-code-to-run-on-amd-and-intel-gpus-without-changes-zluda-is-back-but-both-companies-ditched-it-nixing-future-updates#:~:text=Just like Intel, AMD took,CUDA applications on AMD GPUs."
Subsequently, Janik left Intel and got in touch with AMD, which signed a contract concerning ZLUDA development. Just like Intel, AMD took its time evaluating ZLUDA and asked for ZLUDA to remain private before it came to a decision. Eventually, AMD made the same conclusion as Intel, that "there is no business case for running CUDA applications on AMD GPUs." Janik was then released from the contract and could finally bring ZLUDA back publicly.

Could be that making CUDA more popular is not necessarily a bad thing for Nvidia, as long as they have the best parallel hardware and software stack, the best CUDA performance, and the most complete CUDA support. Nvidia could still release new extensions with proprietary hardware support and have everyone else play catch-up, running an API/language that is under someone else's control. And the CUDA environment is quite vast; only a subset of it ends up supported this way.
 
https://github.com/vosen/ZLUDA

A 4-year-old project, but it was for Intel hardware and was abandoned for a while before AMD got involved with it.
https://thenewstack.io/zluda-a-cuda-for-intel-gpus-needs-a-new-maintainer/

Apparently coding languages are fair use and anyone can make a compiler for them. In the past I worked for a company that added Flash support, in a totally reverse-engineered way, on mobile platforms that did not have Flash support (pre-smartphone revolution), and it felt legal.

https://www.quora.com/Are-all-programming-languages-in-the-public-domain

Google did this for the Java API, for example:
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_Inc.


And AMD was actively involved in that project, making it sound on the legal side of things.
Emphasis on was… AMD also abandoned it.
It’s currently maintained by one guy who is getting it working specifically for the projects he is using it for, as he has neither the time nor the funding to do much more with it.
 
I was gonna post this myself but GDI Lord beat me to it. This is gonna hurt Nvidia's bottom line if this tool does allow developers to run CUDA on AMD hardware. I wonder how this will affect AI?

https://www.phoronix.com/review/radeon-cuda-zluda
Nah, Intel and AMD abandoned it, it only makes them look bad.
The preliminary results for many tasks have CUDA through ZLUDA running better than their respective “optimized” libraries, but still a long way shy of native Nvidia hardware.
ZLUDA only serves as an advertisement as to why you should have just bought Nvidia hardware to begin with. And that’s why Intel and AMD both abandoned it.
 
I doubt it. Convenience vs effort. Most consumers aren't enthusiasts like we are.

Most consumers don't give a rat's ass about CUDA. The people who do are generally the kind willing to put in the extra effort, I think.
 
Most consumers don't give a rat's ass about CUDA. The people who do are generally the kind willing to put in the extra effort, I think.
Well, not CUDA directly… but DLSS and all its features are built on CUDA libraries, as are many of the ray tracing optimizations.
 
I doubt it. Convenience vs effort. Most consumers aren't enthusiasts like we are.
We're talking about CUDA and ROCm, so it's not like someone is just sitting at home casually using these systems. AMD's biggest weakness, and the reason they're so behind with AI, is terrible use of compute on their GPUs. Fix this and a lot of Nvidia buyers might just go with the cheaper, and probably worse-performing, option.
A bit of a dangerous bet; there is probably a reason why AMD left the project (and Intel before them), and a reason why Nvidia made CUDA an API open enough, with open specs, that this is easy to do.

https://www.tomshardware.com/pc-components/gpus/software-allows-cuda-code-to-run-on-amd-and-intel-gpus-without-changes-zluda-is-back-but-both-companies-ditched-it-nixing-future-updates#:~:text=Just like Intel, AMD took,CUDA applications on AMD GPUs."
Subsequently, Janik left Intel and got in touch with AMD, which signed a contract concerning ZLUDA development. Just like Intel, AMD took its time evaluating ZLUDA and asked for ZLUDA to remain private before it came to a decision. Eventually, AMD made the same conclusion as Intel, that "there is no business case for running CUDA applications on AMD GPUs." Janik was then released from the contract and could finally bring ZLUDA back publicly.

Could be that making CUDA more popular is not necessarily a bad thing for Nvidia, as long as they have the best parallel hardware and software stack, the best CUDA performance, and the most complete CUDA support. Nvidia could still release new extensions with proprietary hardware support and have everyone else play catch-up, running an API/language that is under someone else's control. And the CUDA environment is quite vast; only a subset of it ends up supported this way.
It might be AMD's way of sidestepping legal issues? AMD did hire the guy, so maybe that was their way of avoiding a fight with Nvidia; I doubt AMD could just implement CUDA support without Nvidia's legal team getting involved. Could also be that AMD doesn't want to support a standard that gives Nvidia more power? Maybe AMD is more into HIP? Either way, the results are promising.
 
I doubt AMD could just implement CUDA support without Nvidia's legal team getting involved.
You can build CUDA code without NVCC:
https://llvm.org/docs/CompileCudaWithLLVM.html

That was part of Google's effort to run CUDA applications on their own hardware, I think.
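To make that concrete, here is a minimal, hypothetical CUDA sample built with clang instead of NVCC (the file name, the sm_80 target, and the CUDA install path are my assumptions for illustration; see the LLVM doc linked above for the exact flags for your setup):

```
// vecadd.cu -- hypothetical minimal CUDA sample, buildable with clang instead of nvcc.
// Assumed build line (sm_80 and the CUDA install path are placeholders for your setup):
//   clang++ vecadd.cu --cuda-gpu-arch=sm_80 -L/usr/local/cuda/lib64 -lcudart -o vecadd
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short; plain cudaMalloc/cudaMemcpy works too.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Same source, same language; just a different compiler toolchain.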

Programming languages are really hard to protect intellectually; if someone decides to compile code written in one into a binary that runs on something else, that would be hard to argue against legally, I feel, and it has been somewhat common for CUDA:
https://www.intel.com/content/www/u...tic-new-cuda-to-sycl-code-migration-tool.html
https://github.com/cupbop/CuPBoP

Unlike the examples above, being able to run a binary already compiled by NVCC on non-Nvidia hardware is maybe more touchy (legally or license-wise; it could have been interesting for AMD if a third party were doing it). I have no idea here.

Most consumers don't give a rat's ass about CUDA. The people who do are generally the kind willing to put in the extra effort, I think.
A lot of people in the academic/research space have access to a nice IT department, but convenience (and speed: a student doing a master's does not necessarily have a lot of time to set things up, nor is a long setup worth it for a short project) is extremely valuable in that space, and the end user will often not be that computer savvy. Python is extremely popular there for its ease/convenience, and PyTorch replaced TensorFlow for the same reason: much easier to use, especially for non-programmers.
 
Python is extremely popular in that space for its ease/convenience, and PyTorch replaced TensorFlow for that reason.
TensorFlow made memory allocation painful to manage; it was super easy to create memory leaks that ruined output later down the line. PyTorch’s memory management was reason enough to migrate; everything else it did better was icing.
 
This is great news - I've always been against Nvidia's love affair with any proprietary standard they can create, buy, or lock down legally. We've already seen the benefits with things like FSR3 allowing frame generation to be used even on Nvidia 3000 series cards, which Nvidia did not deign to support with DLSS frame generation unless you had a 4000 series. As far as things like ROCm go, I always favor the open standard, and I hope AMD continues to improve its compute/RT support next gen, just as it did with the 7000 series - not up to Nvidia's parallel offerings in those specific usages if you benchmark it all, but still easily good enough in most use cases. Considering that AI, as the next big buzzword, is not going away, and that compute is being used for more and more workloads, I can only hope it becomes increasingly easy for projects to break away from the expectation that CUDA / "the Nvidia-preferred way of doing things in any given situation" is either the only option or so much better that it's not worth doing anything else.

When it comes to AI / LLM or other compute-related workloads, even for enthusiast-ish stuff like setting up your own locally hosted Stable Diffusion or LLaMA instance, it was sort of assumed for a while that you were on NV GPUs, and the "easy" setups were predicated on them. There were often ways to configure things to run just fine on an AMD GPU, but they weren't included in the standard easy-setup install, or they required a more extensive process, which meant that when the original unified project updated to better components the AMD path was often left behind or at least more "manual", leading people to think that "only Nvidia is good at / capable of this kind of workload" when they next chose a GPU. Having something like ZLUDA that easily allows AMD cards to run stuff created for CUDA would be a big step forward, without having to manually switch everything to an alternative. Granted, it's not an ideal long-term solution, but much the same way that Proton and Wine led to greater Linux usage and eventually native gaming support, it's preferable to not having the option. With luck, all hardware and software will trend toward open standards instead of proprietary ones, but the more barriers that can be broken down, the better in the end.
 
Great news. I have been using AMD CPUs for 3 generations now, and I would have preferred to get an AMD GPU, but I use Adobe Lightroom and Photoshop a lot and it seemed that Adobe software was optimized for CUDA. Now when I upgrade, in a year or two max, I will definitely get an AMD GPU. Adobe is starting to do a lot with AI. People have posted in Adobe forums about how some AI-based features are dramatically faster with an upgraded GPU.

For me, it's a strong motivation to NOT get NVidia.
 
If you're really making use of CUDA then you'll still be better off biting the bullet and going with Nvidia. This whole announcement is about ZLUDA no longer being developed at full-speed by the original dev; Nvidia could easily make changes that render it completely unusable if they want.

The Phoronix article has some more info and a decent forum thread going.
 
If you're really making use of CUDA then you'll still be better off biting the bullet and going with Nvidia. This whole announcement is about ZLUDA no longer being developed at full-speed by the original dev; Nvidia could easily make changes that render it completely unusable if they want.

The Phoronix article has some more info and a decent forum thread going.
Going Nvidia may not always be an option, at least not a cost-effective option. What's more interesting about ZLUDA is that CUDA code ran faster on AMD hardware than AMD's own HIP. I'm not sure what this means exactly, but there's certainly potential if AMD put more resources into HIP/ROCm. A translation layer like ZLUDA has a tendency to decrease performance, not increase it. Might mean that CUDA code is better written than HIP/ROCm code.
[Phoronix charts: Blender BMW27 render - NVIDIA CUDA vs. Radeon HIP]
 
If you're really making use of CUDA then you'll still be better off biting the bullet and going with Nvidia. This whole announcement is about ZLUDA no longer being developed at full-speed by the original dev; Nvidia could easily make changes that render it completely unusable if they want.

The Phoronix article has some more info and a decent forum thread going.
I don't think anyone disagrees with that. Ultimately this is mostly "interesting" and not a real solution.
But it does also show that there is no hardware reason why CUDA is proprietary - only the obvious money reasons, or ultimately anti-competitive reasons.

The thing that's funny there is that nVidia could make CUDA open source (they never will); AMD would struggle for at least two gens to catch up, while nVidia would continue to maintain their CUDA dominance and could point to AMD's poor performance using CUDA as one more reason to buy nVidia. Of course, that advantage would eventually erode, which is why, thinking long term, they never will. Not to mention how that would affect companies such as Amazon building ASICs, and of course how it would shape AI.

However, nVidia's position has the opposite problem, which has been brought up by people like Lakados: basically every dev wants to get off of the CUDA libraries as fast as possible because nVidia's costs are insane.
 
Going Nvidia may not always be an option, at least not a cost-effective option. What's more interesting about ZLUDA is that CUDA code ran faster on AMD hardware than AMD's own HIP. I'm not sure what this means exactly, but there's certainly potential if AMD put more resources into HIP/ROCm. A translation layer like ZLUDA has a tendency to decrease performance, not increase it. Might mean that CUDA code is better written than HIP/ROCm code.
[Phoronix charts: Blender BMW27 render - NVIDIA CUDA vs. Radeon HIP]
There’s no "might be" about it: the CUDA libraries are incredibly well optimized.
A while back, articles about Nvidia using their AI to “write drivers” were popping up all over; while not true, they weren't wrong about the AI usage, just its application. They used it to go over many of their huge code bases and have it offer up pages of documentation on how to optimize and streamline them.
 
Might mean that CUDA code is better written than HIP/ROCm code
Could be how good the CUDA compilers are (or the PTX-to-assembly compiler part); a giant amount of work went into making them produce optimized machine code. It's not necessarily that, because CUDA is older or more common, it is written by programmers who know how to write it better.

There are two steps:
[diagram: CUDA source is compiled to PTX, then PTX is compiled to GPU machine code]
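As a rough, hypothetical sketch of those two steps (my own illustration, not from the post): step 1 lowers CUDA C++ to PTX, a virtual ISA (e.g. nvcc --ptx kernel.cu -o kernel.ptx); step 2 turns that PTX into machine code for the specific GPU, either offline with ptxas or at load time by the driver. The snippet below exercises the second step through the CUDA driver API by handing it a tiny PTX string to JIT-compile (file names and build line are assumptions):

```
// Illustrative sketch only: the driver JIT-compiles PTX to GPU machine code at load time.
// Assumed build line: clang++ jit_demo.cpp -I/usr/local/cuda/include -lcuda -o jit_demo
#include <cstdio>
#include <cuda.h>

int main() {
    // Minimal hand-written PTX module with a single empty kernel, just to show the JIT step.
    const char* ptx =
        ".version 7.0\n"
        ".target sm_50\n"
        ".address_size 64\n"
        ".visible .entry noop()\n"
        "{\n"
        "    ret;\n"
        "}\n";

    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    CUmodule mod;
    CUresult rc = cuModuleLoadData(&mod, ptx);  // PTX -> native GPU code happens here
    printf("cuModuleLoadData: %s\n", rc == CUDA_SUCCESS ? "ok" : "failed");

    if (rc == CUDA_SUCCESS) cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```

(As I understand it, a translation layer like ZLUDA plugs in at roughly this level, taking the PTX and compiling it for a different GPU instead.)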
 
However, nVidia's position has the opposite problem, which has been brought up by people like Lakados: basically every dev wants to get off of the CUDA libraries as fast as possible because nVidia's costs are insane.
They are insane for licensing, but much of that is when using platforms like VMware, which with its new licensing costs can die in a fire; that pricing is ungodly, and I'm not sure what Broadcom is thinking with these new prices, but nope, not today, Satan.
Nvidia GRID licensing has come down, so that's good at least, but you have to use their online licensing servers; they no longer support stand-alone, so if things expire, things stop working correctly...
It's not so much that we want to get off the CUDA libraries (because they work damned well), but we don't like there not being options; nobody likes a lack of options.

So while I can grumble that they have me pinned down in a bear hug, at least it's a warm one? The options as they exist are: pay Nvidia, and things work smoothly and reliably with little to no headaches and a great 1-800 number that I personally have never had to wait more than 5 minutes on hold for; or don't pay Nvidia and hope for the best, which usually leaves you crawling back to Nvidia after your third emergency downtime because something crashed again.
 
Not surprising that Intel and AMD both stopped supporting the development. It just makes both of them look bad.

CUDA through the ZLUDA translation layer is faster on AMD than OpenCL on AMD... lol, that just tells you that Nvidia's libraries are better.

What scenario would someone be in where this is useful??

Starting point: something written for CUDA. So you are already in the Nvidia ecosystem. Here is ZLUDA, it lets you run your stuff on AMD, but it's slower. It's interesting from a technical perspective but just goes to show how far behind Intel and AMD really are. Maybe a high-school-level program trying to teach CUDA programming could use it to save money on hardware... but they are still teaching and using CUDA... so who really wins? Not AMD or Intel, that's for sure.

Why does Nvidia need to make the libraries open source? They are already the best there is. If you have spent billions in R&D, you probably are not going to just give it away. AMD's 'solution' isn't even remotely better just because it is open source. AMD writes up their equivalent set of libraries, makes them open source, and then throws them out to the open-source community: "Here you go, have fun!" Then they stop supporting it in any real way, at least in any way that requires investment. That's followed by a fancy speech: "We at AMD support open source, and just released XYZ as open source to the community. We think this is better than proprietary solutions, because reasons..." It hasn't yet turned out to be better.

Open source != better. It can be, but it isn't in this case. If AMD gave an actual shit, they would spend more on their open-source libraries. They took the economical approach by choosing open source: let someone else do the work of making it decent. I don't see how this is 'laudable'; they are just using you.
 
Why does Nvidia need to make the libraries open source? They are already the best there is. If you have spent billions in R&D, you probably are not going to just give it away. AMD's 'solution' isn't even remotely better just because it is open source. AMD writes up their equivalent set of libraries, makes them open source, and then throws them out to the open-source community: "Here you go, have fun!" Then they stop supporting it in any real way, at least in any way that requires investment. That's followed by a fancy speech: "We at AMD support open source, and just released XYZ as open source to the community. We think this is better than proprietary solutions, because reasons..." It hasn't yet turned out to be better.

Open source != better. It can be, but it isn't in this case. If AMD gave an actual shit, they would spend more on their open-source libraries. They took the economical approach by choosing open source: let someone else do the work of making it decent. I don't see how this is 'laudable'; they are just using you.

AMD probably open sources things to make big tech customers more comfortable running their stuff. And maybe to get a leg up on Nvidia for specific uses that way.
 
Why does Nvidia need to make the libraries open source? They are already the best there is. If you have spent billions in R&D, you probably are not going to just give it away. AMD's 'solution' isn't even remotely better just because it is open source. AMD writes up their equivalent set of libraries, makes them open source, and then throws them out to the open-source community: "Here you go, have fun!" Then they stop supporting it in any real way, at least in any way that requires investment. That's followed by a fancy speech: "We at AMD support open source, and just released XYZ as open source to the community. We think this is better than proprietary solutions, because reasons..." It hasn't yet turned out to be better.
The idea is that AMD is better off being the leader of an open standard than trying to compete with Nvidia in another closed standard. I think AMD gave up on GPU compute when the crypto market crashed, because they thought there was no more use for better GPU compute. Then ChatGPT came out and made good use of GPU compute, revitalizing Nvidia's polished CUDA standard. Even George Hotz has given up on AMD.


https://youtu.be/x_aSeTmb68k?si=9LdjMuA9vI9Rt--P
Open source != better. It can be, but it isn't in this case. If AMD gave an actual shit, they would spend more on their open-source libraries. They took the economical approach by choosing open source: let someone else do the work of making it decent. I don't see how this is 'laudable'; they are just using you.
Open source is always better, and that's the direction the industry is moving towards. I'm sure if CUDA were open sourced it would be much better, but again that's not to the benefit of Nvidia. It just so happens that Nvidia was the only company that put a lot of effort into GPU compute, and it paid off. Not even Intel put any effort into this, and now they are, because Nvidia is taking away their market share.
 
Not surprising that Intel and AMD both stopped supporting the development. It just makes both of them look bad.

CUDA through the ZLUDA translation layer is faster on AMD than OpenCL on AMD... lol, that just tells you that Nvidia's libraries are better.

What scenario would someone be in where this is useful??

Starting point: something written for CUDA. So you are already in the Nvidia ecosystem. Here is ZLUDA, it lets you run your stuff on AMD, but it's slower. It's interesting from a technical perspective but just goes to show how far behind Intel and AMD really are. Maybe a high-school-level program trying to teach CUDA programming could use it to save money on hardware... but they are still teaching and using CUDA... so who really wins? Not AMD or Intel, that's for sure.

Why does Nvidia need to make the libraries open source? They are already the best there is. If you have spent billions in R&D, you probably are not going to just give it away. AMD's 'solution' isn't even remotely better just because it is open source. AMD writes up their equivalent set of libraries, makes them open source, and then throws them out to the open-source community: "Here you go, have fun!" Then they stop supporting it in any real way, at least in any way that requires investment. That's followed by a fancy speech: "We at AMD support open source, and just released XYZ as open source to the community. We think this is better than proprietary solutions, because reasons..." It hasn't yet turned out to be better.

Open source != better. It can be, but it isn't in this case. If AMD gave an actual shit, they would spend more on their open-source libraries. They took the economical approach by choosing open source: let someone else do the work of making it decent. I don't see how this is 'laudable'; they are just using you.
Exactly. Another member here, Lakados, already said this: "ZLUDA only serves as an advertisement as to why you should have just bought Nvidia hardware to begin with. And that’s why Intel and AMD both abandoned it."
You said the same thing, and I've read this sentiment on other sites - including Phoronix (the source of the article cited).

I know there are a lot of AMD supporters on this site, but AMD really is an incompetent company, and they've faced those accusations before - they have had 'successes' or 'exceptions' where they ironically released a good product - AM5, for example, and their (latest) Threadripper seems pretty good if you need that and can afford it.

But their software stack - their investment in compute - has been sorely lacking, and progress has been slow. The FACT that this was only an individual, and that he was originally employed by Intel, should speak volumes. Intel figured it out - they don't really want to risk potential legal issues, and having the idea out there that 'you should have just picked the "original" (Nvidia)' probably isn't good advertising - and although companies that buy tons of graphics cards for data centers / AI business might want a good deal, the ZLUDA situation does leave a bit of uncertainty, right?

Someone said here - if Nvidia decides to implement something that MIGHT make the continued progress of using ZLUDA a bit complicated - well, is AMD going to re-hire the guy to address that?

To me, this just confirms, or at least suggests, that AMD has struggled to compete with their own HIP/ROCm - ZLUDA is able to surpass HIP in performance. AMD's own ray tracing, i.e. HIP-RT, has been a disaster.
 
And now we know why AMD stopped funding ZLUDA.
https://www.tomshardware.com/pc-com...tly-targets-zluda-and-some-chinese-gpu-makers

"The restriction appears to be designed to prevent initiatives like ZLUDA, which both Intel and AMD have recently participated, and, perhaps more critically, some Chinese GPU makers from utilizing CUDA code with translation layers. We've pinged Nvidia for comment and will update you with additional details or clarifications when we get a response."

 
And now we know why AMD stopped funding ZLUDA.
https://www.tomshardware.com/pc-com...tly-targets-zluda-and-some-chinese-gpu-makers

"The restriction appears to be designed to prevent initiatives like ZLUDA, which both Intel and AMD have recently participated, and, perhaps more critically, some Chinese GPU makers from utilizing CUDA code with translation layers. We've pinged Nvidia for comment and will update you with additional details or clarifications when we get a response."

Intel moved to a code translation tool that converts your CUDA code to oneAPI some 2 years back. Maybe because they could see this happening, because it's something they would do?

But tools like ZLUDA are a double-edged sword: yes, they let you run your competitor's code, but they also open up direct comparisons.
Competition is good; being able to compare apples to apples is good. But often in these cases the desire to look at non-Nvidia hardware while maintaining CUDA compatibility comes about because the Nvidia hardware is expensive and the competition is cheaper. If the competition can get 1-to-1 performance, then great; it becomes a straight question of warranty, stability, and support, and whether the differences there are worth the price difference to you. But if performance isn't 1-to-1, now you have a direct comparison point: it's 15% slower and 20% cheaper, is that net 5% worth it? What if you could instead use a cheaper Nvidia option that performs about the same as that cheaper product? Then you look at your other use cases and how they perform.

ZLUDA, for everything it can offer a platform, also advertises and displays its deficiencies, and sometimes allowing more comparisons isn't in your best interest.
 
This was added to the EULA back in fall 2021...

The article just points out that the up-to-date EULA text is now correctly included in the installer, which sounds like the earlier omission was a simple file mistake:
Nvidia has banned running CUDA-based software on other hardware platforms using translation layers in its licensing terms listed online since 2021, but the warning previously wasn't included in the documentation placed on a host system during the installation process.

Good luck to them trying to enforce an EULA in China... especially in a context where they cannot buy Nvidia GPUs to run CUDA according to our rules... how stupid would it be for them to care about some EULA in that context...
 
This was added to the EULA back in fall 2021...
Nvidia is altering the deal. Pray they don't alter it any further.

"Nvidia has banned running CUDA-based software on other hardware platforms using translation layers in its licensing terms listed online since 2021, but the warning previously wasn't included in the documentation placed on a host system during the installation process. This language has been added to the EULA that's included when installing CUDA 11.6 and newer versions."
 
Chinese companies working to reverse engineer the CUDA libraries and accelerate them on their homegrown GPUs was the first thing they started the second the US imposed its first sanctions on US hardware exports. Honestly, probably beforehand; I recall MT advertising that their GPUs would support CUDA code back in 2020, and I'm pretty sure they demoed it doing so with their S2000 some time in 2022.
In China the EULA isn't worth the bandwidth it takes to download it, so it's at best a stopgap against “legitimate” companies. But CUDA's time is limited, Nvidia knows it, and they are assuredly working on their next big must-have that is exclusive to them and will keep them on top.
 
Nvidia is altering the deal. Pray they don't alter it any further.
I am not sure the EULA being online or local matters at all, at least not to AMD's lawyers (and they would know how little it changes what they can or cannot do); EULAs are often just for show for average people, with no actual legal teeth.
 
I am not sure this new EULA clause would survive a serious legal challenge in the US either...
There is court precedent that considers disassembly, and the other things mentioned above, done in order to run compiled code on different hardware that an EULA tries to restrict, to be fair use (like when a company made a translation layer to play PlayStation games on a computer).
 