27 comments:
> AMD’s software experience is riddled with bugs [...] AMD’s weaker-than-expected software Quality Assurance (QA) culture and its challenging out of the box experience.
This has anecdotally been true since forever. Back in the day, OpenCL implementations were passing conformance tests but performance was poor. They could not turn hardware capabilities into performance for compute users. Drivers were buggy. Documentation was poor compared to Nvidia's docs and forums. Offerings were inconsistent (look up SYCL from Codeplay) and ownership of what it was like to develop for AMD was unclear. The notion that it might not have improved, or is only now improving, is puzzling. It can't be for lack of recognizing the problem. Intuitively it does not seem so difficult. I'm curious what the reasons are.
Even before ATI was acquired by AMD, they had driver support problems.
When I was working for a Unix commercial graphics software company, the CTO told me how bad the information he received under ATI’s NDA was: different revisions of the same chipset had contradictory register settings, so the driver had to identify the revision before writing a value to the write-only configuration registers. The same chipset might need a 0 or a 1; writing the wrong values could crash the driver.
FWIW, back in 2015 OpenCL 2.0 performance was quite good on then-current AMD GPUs (IMO), but the problem was that:
1. You had to implement everything yourself, from scratch, since AMD's GPU BLAS was barely compilable,
2. They abandoned OpenCL that year and switched to HIP (or whatever their copy of CUDA was called), which didn't even compile (in practice) for quite some time, and
3. Even with HIP, you were on your own when it came to BLAS and other standard library implementations, because AMD provided nothing of the sort for a long time.
All in all, it's not that driver performance was poor per se, but that AMD did nothing to provide a software ecosystem, which meant its hardware wasn't realistically usable unless your pockets were deep enough to do AMD's job and fund the re-development of the whole ecosystem from scratch.
In other words, it was a MUCH better ROI to just use Nvidia, pay a little bit more for the hardware, and save millions on software :)
CUDA also compiles to PTX, which makes it much easier to distribute and therefore also easier for users to actually use. Doesn't matter that much when you're writing code for specific hardware like MI300X, but it's part of the developer story.
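For anyone unfamiliar: PTX is the portable intermediate representation that the driver can JIT-compile for whatever GPU it finds at runtime, which is what makes shipping a single artifact practical. A minimal sketch of emitting it (assuming the CUDA toolkit is installed; "kernel.cu" is a hypothetical source file used purely for illustration):

    import subprocess

    # nvcc's --ptx flag emits the PTX intermediate representation instead of a
    # device binary; the driver can later JIT-compile this PTX for the GPU
    # present at runtime, including architectures newer than the build machine's.
    # "kernel.cu" is a hypothetical source file, not something from the article.
    subprocess.run(["nvcc", "--ptx", "kernel.cu", "-o", "kernel.ptx"], check=True)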
Mirrors the geohot rants about AMD at the time, though as others point out this (2024) is ancient news in the AI world, and I'm not quite sure what value it adds to the current discussions.
Has this changed? If I want to go hands-on with development using pytorch or whatever is used now, would you recommend an AMD card?
Genuine question, I have not followed this topic closely for years :)
Still rocking a 3090 so can't speak from experience, but the general vibe around simple at-home inference seems like it has improved (esp. since both Vulkan and ROCm are now viable paths on newer cards).
>development using pytorch
Would probably still play it safe with Nvidia for stuff more adventurous than token generation, even if things have improved.
Please just get everything in PyTorch to work, and work well (and across all graphics cards too). This is the starting point, and it doesn't matter how you do it. But the fact that you cannot even do some very basic stuff on AMD means you are going to be left unused by researchers, so getting further up the stack is going to be almost impossible.
Does PyTorch not work on AMD cards? I remain very nervous about returning to the AMD ecosystem. On paper AMD has been a compelling choice for GPGPU work for years, right up until it turns out the hardware can't actually do what it claims. But the PyTorch problem seemed to be largely solved years ago. The issues weren't at the application layer; they were crippling firmware bugs that AMD didn't seem interested in getting a handle on. PyTorch ran fine until the kernel panicked or whatever, but that isn't a PyTorch problem.
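For what it's worth, the ROCm builds of PyTorch are supposed to expose AMD GPUs through the same "cuda" device type, so ordinary CUDA-style code paths should run unchanged. A minimal sanity check, assuming a ROCm wheel of torch is installed, looks something like:

    import torch

    # On ROCm builds of PyTorch the HIP backend reports itself as "cuda",
    # so the usual checks and device strings work unmodified.
    print(torch.cuda.is_available())      # True if a supported GPU is visible
    print(torch.version.hip)              # set on ROCm builds, None on CUDA builds
    print(torch.cuda.get_device_name(0))  # marketing name of the card

    # A basic op to confirm the stack actually executes kernels.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    print(y.sum().item())

Whether it stays stable under a real training run is a separate question, which is where the firmware complaints above come in.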
The problem is "just". "Just" getting pytorch to work and to work well is a huge undertaking.
Just, in this case, means “at minimum” or “first and foremost, no excuses”. I obviously understand this is a huge undertaking. Nobody said attempting to be competitive with NVIDIA in AI would be a walk in the park.
For a trillion dollars, they should be able to figure it out.
Correction: why wasn't it competitive 2 years ago, basically half the AI summer ago?
I wonder if hiring is a big factor here. I presume all the really good systems+parallel programmers would rather gain more experience on NVIDIA hardware than AMD, so given the choice, they'd go with NVIDIA. Does AMD do enough to win them over?
Please amend the title; this is a December 2024 article and the conclusions are misleading in 2026.
If AMD's betting the company on their AI compute, they had best follow the advice in the article because the only way to compete with NVIDIA is to meet/exceed not just the performance but also the DevX.
These days it's for sure the dev environment that is lacking: hardware is okay (potentially great?!), software abysmal. Running a local LLM in a stable manner implies using Vulkan... any attempt at ROCm is totally hamstrung by haphazard hardware support, alongside an online presence poisoned by people primarily discussing work-arounds rather than work when it comes to AMD as a platform. Argh.
Is there any benefit of Vulkan vs ROCm on a card where ROCm is fully supported?
You can't have good performance without good DevX. There's a reason why we get a new Python DSL for Nvidia GPUs every week.
NVIDIA has such a big moat around their CUDA architecture that I don't think AMD will ever be able to outcompete them in AI compute unless they somehow find 2-3 Nobel-prize-level breakthroughs today.
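Case in point: Triton is probably the most widely used of those Python DSLs, and it nominally targets both CUDA and ROCm backends. A minimal vector-add kernel, just as a sketch of what these DSLs look like (assuming triton and a GPU build of torch are installed):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-sized slice of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offsets < n
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.randn(4096, device="cuda")
    y = torch.randn(4096, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(x.numel(), 1024),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
    assert torch.allclose(out, x + y)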
This is from more than 2 years ago; why post this now?
I love how they just butcher that article.
I remember when it came out a little over a year ago, and it's just as wrong today as it was then.
[2024]
The important part of Hardware is Software.
After all, if the Software does not work, it's just a Paperweight.
And yet hardware companies with good software are the exception, not the norm. Is it just the cultural mismatch between hardware and software development life cycles and planning philosophies, or is there more to it?
AMD just doesn't seem to be that good at software.
Nvidia had the first-mover advantage. Nvidia spent so many years perfecting CUDA to work well with PyTorch. Before ROCm, there was only CUDA. There were so many developers building their use cases on top of PyTorch+CUDA and bringing all that feedback to PyTorch; this made CUDA battle-ready and stable. AMD can get there, especially now with the demand for compute, but as someone already said here, the biggest focus needs to be on PyTorch.