▲ ProjectArcturis 14 hours ago
I'm confused why this is addressed to Azure instead of OpenAI. Isn't Azure just offering a wrapper around ChatGPT? That said, I would also love to see some examples or data, instead of just "it's getting worse".
▲ SBArbeit 13 hours ago | parent | next [-]
I know that OpenAI has made compute deals with other companies, and over time the share of inference they run in Microsoft Azure data centers will shift, but I doubt that much, if any, of it has moved yet, so that's not a reason for a difference in model performance.

With that said, Microsoft has a different level of responsibility, both to its customers and to its stakeholders, to provide safety than OpenAI or any other frontier provider does. That's not a criticism of OpenAI or Anthropic or anyone else; I believe they're all trying their best to provide safe usage. (Well, other than xAI and Grok, for which the lack of safety is a feature, not a bug.) The risk to Microsoft of getting this wrong is simply higher than it is for other companies, and that's why they have a strong focus on Responsible AI (RAI) [1].

I don't know the details, but I have to assume there's a layer of RAI processing on models served through Azure OpenAI that isn't there when you use OpenAI's models directly through the OpenAI API. That layer is valuable for the companies who choose to run their inference through Azure and want to maximize safety, and I wonder if that's where some of the observed changes are coming from. I hope the commenter posts their proof for further inspection. It would help everyone.
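For what it's worth, that extra layer is visible in the wire format: Azure OpenAI chat-completion responses carry per-choice content-filter annotations that the plain OpenAI API doesn't return. A minimal sketch of pulling those out, assuming the documented annotation shape (category → `filtered`/`severity`); the response fragment here is made up, not a real API reply:

```python
import json

# Hypothetical Azure OpenAI chat-completion response fragment. The
# content_filter_results shape follows Azure's documented content-filter
# annotations; responses from the plain OpenAI API lack this field.
azure_response = json.loads("""
{
  "choices": [
    {
      "message": {"role": "assistant", "content": "..."},
      "finish_reason": "stop",
      "content_filter_results": {
        "hate":      {"filtered": false, "severity": "safe"},
        "self_harm": {"filtered": false, "severity": "safe"},
        "sexual":    {"filtered": false, "severity": "safe"},
        "violence":  {"filtered": true,  "severity": "medium"}
      }
    }
  ]
}
""")

def filtered_categories(response):
    """Return (category, severity) pairs the filter flagged on any choice."""
    flagged = []
    for choice in response.get("choices", []):
        for category, verdict in choice.get("content_filter_results", {}).items():
            if verdict.get("filtered"):
                flagged.append((category, verdict.get("severity")))
    return flagged

print(filtered_categories(azure_response))  # [('violence', 'medium')]
```

Logging these annotations alongside outputs would be one concrete way to test whether the filter layer, rather than the model itself, explains any perceived degradation on Azure.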
▲ SubiculumCode 14 hours ago | parent | prev [-]
I don't remember where I saw it, but I recall a claim that Azure-hosted models performed worse than the same models hosted by OpenAI.