IshKebab 3 days ago

I disagree - training enormous LLMs is super complex and requires a data centre... But most research is not done at that scale. If you want researchers to use your hardware at scale, you also have to make it so they can spend a few grand and do small-scale research with one GPU on their desktop.

That's how you get good software support in AI frameworks.

jlei523 3 days ago

I disagree with you. You don't need researchers using your client hardware in order to make inference chips. All the big tech companies are building inference chips in-house, and AMD and Apple are making local inference doable on client hardware.

Inference is vastly simpler than training or scientific compute.
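A minimal PyTorch sketch (my own illustration, not from the thread) of why: inference is a single forward pass with no gradient bookkeeping, while one training step also has to retain activations for backprop and maintain gradients plus optimizer state (e.g. Adam's moment buffers), all of which cost memory and compute that inference never pays for.

    # Illustrative only: contrasts an inference pass with a training step.
    import torch
    import torch.nn as nn

    model = nn.Linear(512, 512)
    x = torch.randn(8, 512)

    # Inference: forward pass only; no activations retained for backprop.
    with torch.no_grad():
        y = model(x)

    # Training: forward + loss + backward + optimizer step. Activations,
    # gradients, and optimizer state (Adam keeps two extra buffers per
    # weight tensor) all add memory traffic inference doesn't have.
    opt = torch.optim.Adam(model.parameters())
    loss = nn.functional.mse_loss(model(x), torch.randn(8, 512))
    loss.backward()
    opt.step()

And that's before distributed concerns: training at scale also needs gradient synchronisation across devices, which a pure inference chip can skip entirely.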