▲ giancarlostoro an hour ago
One interesting thing Anthropic did was putting their stack on the various cloud providers. I wonder if they'll put it on GCP and Azure next, since they've put it on AWS first at a level we haven't seen from a major AI provider to date. Your company can have its own Claude stack on your cloud, just like an ELK stack; if they can do this for both Azure and GCP, then OpenAI really has to catch up. In my eyes I would rather use the AI I can run on my own paid infrastructure, so if there's an outage it's isolated, or I could potentially have a different region / DC to fall back on. I'm still surprised that neither Microsoft nor Amazon has made their own models front and center on their cloud offerings. I guess Microsoft probably does have Phi on there, but it's not prominent, especially with something like Copilot for Devs (seriously Microsoft, rebrand that damn thing so it's clear what you mean by Copilot!) where they could use cheaper compute by running something like Phi.
▲ SubiculumCode an hour ago | parent | next [-]
The recent deal with SpaceX AI to use their severely underutilized GPU compute is pretty telling to me. Rolling out compute is a hardware problem; rolling out good models needs more than compute, it needs good AI engineers. SpaceX, Amazon, et al. can do hardware very well. AI engineering, maybe not so much.
▲ NitpickLawyer an hour ago | parent | prev [-]
Claude is already on Vertex: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/p...