mips_avatar 6 days ago

I don't know what democratizing AI means; AWS doesn't have the GPU infrastructure to host inference or training at large scale.

lizknope 6 days ago | parent | next

I started this part of the thread and mentioned Trainium, but the person you replied to gave a link. Follow it and you can see the chips Amazon designed.

Amazon wants people to move away from Nvidia GPUs and onto its own custom chips.

https://aws.amazon.com/ai/machine-learning/inferentia/

https://aws.amazon.com/ai/machine-learning/trainium/

mips_avatar 6 days ago | parent

TBH I was just going off what I've heard: that AWS is a terrible place to get H100 clusters at scale. And for the training we were looking at, we didn't really want to consider moving off CUDA.

PartiallyTyped 6 days ago | parent | prev

Huh? That’s quite the assertion. They provide the infrastructure for Anthropic, so if that’s not large scale, idk what is.

ZeroCool2u 6 days ago | parent

Anthropic has to use GCP as well, which is arguably a strong indictment of their experience with AWS. Coincidentally, that aligns with my own experience trying to train on AWS.