martinpw 3 days ago:
Thanks for the additional information. I am still puzzled though. This sounds like it is a third party (maybe just a small group of devs?) doing all the work, and from your link they have had to beg AMD just to send them hardware? If this work was a significant piece of what is required to get ML users onto AMD hardware, wouldn't AMD just invest in doing this themselves, or at least provide much more support to these guys?
spmurrayzzz 3 days ago (parent):
> This sounds like it is a third party (maybe just a small group of devs?) doing all the work

Just as a quantitative side note here: tinygrad has almost 400 contributors, and pytorch has almost 4,000. That might seem small, but both projects have a larger contributor footprint than the total headcount of most tech companies operating at significant scale. On top of that, consider that pytorch is a project with its origins at Meta, and Meta has internal teams that spend 100% of their time supporting it. Coupled with the fact that Meta just purchased nearly 200k units of AMD inference gear (MI300X), there is a massive groundswell of engineering effort being pushed in AMD's direction.

> wouldn't AMD just invest in doing this themselves, or at least provide much more support to these guys?

That was actually the point of George Hotz's "cultural test" (as he put it). He wanted to see if they were willing to part with some expensive gear in the spirit of enabling him to help them with more velocity. And they came through, so I think that's a win through whichever lens you analyze it.

Since resources are finite, especially human capital, there's only so much to go around. This lets AMD focus more on the software closer to the metal, namely the driver. They still have significant stability issues in that layer to overcome, so letting the greater ML community help them shore up the deltas in other areas is a good thing.