epistasis 13 hours ago

Claude got a looooot more buy-in with a lot of privacy-concerned orgs I work with because they could access it through their "trusted" intermediary, Amazon. OpenAI has been banned and is not trusted. I'm not sure that I agree with these orgs' legal teams' assessments, but they definitely read the terms of service far more closely than I did.

We will see if this changes the equation, but it feels like OpenAI is pretty far behind and playing catch-up on all fronts. Though to be honest, "pretty far behind" is like 2-8 weeks in the AI world, so it may not matter a ton; it's mostly perception. And for me and my information bubble, perception of OpenAI is rock-bottom due to Sam Altman. From appearing unethical to appearing unhinged with demands for fabs and everything else, I'm not a fan.

fny 10 hours ago | parent | next [-]

You can sign ZDR (zero data retention) agreements with any of the major LLM providers. Using AWS alone is also not sufficient: even though AWS is running the model, you still need to contact them for proper ZDR. [0]

[0]: https://platform.claude.com/docs/en/build-with-claude/claude...

PretzelJudge 9 hours ago | parent [-]

Helpful link. Thank you.

I think that when people worry about ZDR, what they really worry about is data governance. From what I've seen there's a general distrust of OpenAI. AWS may keep your data around (without a formal ZDR agreement), but the governance concern (your data being used for training without your consent) seems much lower there: any breach of contract would risk destroying trust in what's already a massively profitable business for AWS, so the incentives just aren't there.

I’m not claiming OpenAI is training on API data. Just that they don’t have as strong of an incentive not to as AWS.

donavanm 8 hours ago | parent [-]

AWS started taking limited data retention very seriously around 2015. Before that it was reasonable controls and a strong culture of preserving customer privacy. After 2015 or so they implemented strong controls, to the point where service team members can't feasibly access customer data in the service they run, and account termination kicks off a legitimate data removal process ("GDPR compliance"). They also take the terms of service and user agreement ("your data", etc.) very seriously in general.

bg24 10 hours ago | parent | prev | next [-]

While Anthropic has the best model and a focused leadership (no distractions, no lawsuits), they got a lot of their enterprise access thanks to AWS. It's mutual, no doubt, with both sides benefiting. The feedback loop with AWS's enterprise customers would have helped them get enterprise-ready faster. Just my hypothesis.

stingraycharles 6 hours ago | parent [-]

But Azure is just as big, if not bigger, in the enterprise. The argument that Azure didn't give OpenAI enough access within enterprises doesn't make sense to me.

dannyw an hour ago | parent [-]

Microsoft Azure and OpenAI have been basically hating each other since the beginning, since their incentives were completely misaligned.

consumer451 11 hours ago | parent | prev | next [-]

In terms of legal, SLA, and data concerns, is this any better than OpenAI on Azure? That has been around for a while.

PretzelJudge 9 hours ago | parent | next [-]

By default you can't access the latest OpenAI models on Azure; you have to request access. We requested access for a very straightforward use case and never got it. We switched to Anthropic on Bedrock for that reason.

UqWBcuFx6NV4r 10 hours ago | parent | prev [-]

You don’t have to use Azure.

outside1234 13 hours ago | parent | prev | next [-]

The thing they are really far behind on is a business model. They are losing wild amounts of money per customer, and it is hard to see how the competitive situation is going to allow them to fix that.

echelon 13 hours ago | parent [-]

Given the scaling hurdles Claude Code / Opus is having, those Anthropic customers might leave for Codex. I'm _this_ close.

jwilliams 13 hours ago | parent | next [-]

Codex is pretty good. There's friction in switching, but I think it's sensible to be across multiple AI toolchains.

try-working 9 hours ago | parent | next [-]

No friction in switching coding models.

NamlchakKhandro 12 hours ago | parent | prev | next [-]

Pi mono.

Nuff said

unrelat3d 12 hours ago | parent [-]

This is what I use now, after testing others through 2025.

It has the most "UNIX" feel: a simple app that you compose just the right flow from, and nothing more.

felixgallo 10 hours ago | parent | prev [-]

Thing is, if you're using Codex, you're supporting Sam Altman and the idea of Sam Altmans, in the same way that if you use X or buy a Tesla, you're supporting Elon Musk and the idea of Elon Musks. That's a pretty big tax to factor into the usage of such products. If you even got 5% better coding results, would that make up for the future they're trying to build?

sheeshkebab 8 hours ago | parent | next [-]

The more Dario talks the less I want to have anything to do with his wares.

xienze 10 hours ago | parent | prev [-]

Dario wants to replace you with AI as well. Don't be fooled into thinking he's your friend because he said no to Trump that one time. I'll remind you that Musk used to be the left's hero not too long ago.

felixgallo 8 hours ago | parent [-]

I'm in the "AI could be good for humanity" camp, and in this camp, we believe that Dario/Anthropic is a radically better choice going forward than the alternatives at this moment. In this camp we are not 'fooled into thinking he's our friend because he said no to Trump that one time', we are evaluating the entire set of available information and figuring that Anthropic's the best bet.

As for Musk ever being "the left"'s "hero" -- that's amazing, that's what Pauli would call 'not even wrong'.

epistasis 13 hours ago | parent | prev | next [-]

I'm getting pretty close too, but I wouldn't switch to Codex; I'd switch to one of the open agents that can use any backing LLM. My reasoning is that if I'm willing to pay the cost of the small changes in usage, I might as well switch to an open-source agent that I can add my own convenience features to, like remote sessions and phone-based operation.

jfkimmes 13 hours ago | parent [-]

Codex is open source and allows any model to be configured.

epistasis 13 hours ago | parent | next [-]

Many thanks for that info!

bossyTeacher 12 hours ago | parent | prev | next [-]

Why Codex when you can use something that hasn't been touched by Sam Altman? Surely, your drive to get the very best model isn't stronger than your sense of ethics?

NamlchakKhandro 12 hours ago | parent | prev [-]

Codex is not open source. And it's not even that extensible.

milkshakes 12 hours ago | parent | next [-]

https://github.com/openai/codex

ribosometronome 13 hours ago | parent | prev [-]

That would be subscription customers, no? Rather than Bedrock or per-API customers? Many of the companies running on Bedrock or pay-per-use have per-day limits above the maximum monthly subscription cost.

johnbarron 11 hours ago | parent | prev | next [-]

It's not just about AWS being some "trusted intermediary"... it's that the model runs inside the customer's own AWS account under a different contract. AWS explicitly states that inputs/outputs are not shared with model providers and are not used to train base models. [1]
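For a sense of what that means in practice, here's a minimal sketch of calling Claude through Bedrock with boto3 from your own account; the region, model ID, and prompt are purely illustrative, so check the Bedrock model catalog for what's actually enabled in your account:

```python
import boto3

# The request is handled inside your own AWS account/region under the AWS
# contract; per the Bedrock FAQ, prompts and outputs are not shared with
# the model provider or used to train base models.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize this internal incident report."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```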

And for OpenAI, there is a May 2025 preservation order in NYT v. OpenAI. The court is forcing OpenAI to retain ChatGPT output logs indefinitely, including chats users have deleted that would normally be purged within 30 days [2]. That makes it a non-starter for HIPAA/GDPR-bound orgs.

[1] https://aws.amazon.com/bedrock/faqs/

[2] https://openai.com/index/response-to-nyt-data-demands/

hn_throwaway_99 11 hours ago | parent [-]

I'm confused; your own [2] link says that OpenAI is no longer required to store output logs indefinitely going forward:

> Update on October 22, 2025:

> After months of litigation, we are no longer under a legal order to retain consumer ChatGPT and API content indefinitely. Our obligations under the earlier order ended on September 26, 2025.

> We’ve returned to our standard data retention practices:

> Deleted ChatGPT conversations and Temporary Chats will be automatically deleted from our systems within 30 days.

> API data will also be automatically deleted after 30 days.

TZubiri 7 hours ago | parent | prev | next [-]

It's like Coca-Cola being banned at a school, and then Pepsi getting some contracts with the cafeteria because of it.

giancarlostoro 13 hours ago | parent | prev [-]

They're also not focused exclusively on building an LLM; they have video and image generation too. Anthropic has a single focus, and that's why they're usually at the very top of the SWE benchmarks.

phillipcarter 13 hours ago | parent | next [-]

Isn't it the case that OpenAI and Anthropic regularly swap places at the top of the latest benchmarks? They're also so close in scores that it's effectively a wash anyway.

What OP is referring to is Anthropic aligning with corporate terms and conditions early, positioning themselves to be effectively resold by AWS rather than requiring orgs to procure them directly. This is huge in the enterprise world because the processes to get broad approval are generally far smaller and shorter for "just another AWS service" compared to a whole new vendor.

djtriptych 11 hours ago | parent | next [-]

OpenAI did the same thing with Microsoft/Azure though.

Grimblewald 11 hours ago | parent | prev [-]

Isn't it an open secret that benchmarks are largely irrelevant at this point? Why else do we all have a personalized test battery for new models? That said, I've stopped testing ChatGPT entirely. It's still OK, but it's beaten by local models and gets thrashed by non-OpenAI frontier providers. I get the history, but holding up OpenAI's outputs as equivalent is like comparing Yahoo to Google after Yahoo's collapse in search.

OpenAI's language models are largely irrelevant at this point, IMO.

epistasis 13 hours ago | parent | prev | next [-]

IMHO the benchmarks aren't useful, and ranking among the frontier models is mostly noise. The extra features around the coding agent have a much bigger impact on productivity than having to provide slightly more specification and guidance to the models; a 90% success rate versus a 92% success rate on the tasks I ask it to do is far more influenced by what I say than what the model is capable of.
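As a back-of-the-envelope check (assuming a benchmark of roughly 200 independent tasks, a number picked purely for illustration), the sampling noise alone swamps a 2-point gap:

```python
import math

# Standard error of an observed success rate p over n independent tasks:
# sqrt(p * (1 - p) / n)
def success_rate_se(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

n = 200  # assumed benchmark size, purely illustrative
for p in (0.90, 0.92):
    se = success_rate_se(p, n)
    print(f"p={p:.2f}  SE={se:.3f}  ~95% interval: [{p - 2 * se:.3f}, {p + 2 * se:.3f}]")

# With n=200 the intervals are roughly 90% +/- 4 points and 92% +/- 4 points,
# which overlap heavily, so a 2-point gap is well within run-to-run noise.
```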

DrewADesign 11 hours ago | parent | prev | next [-]

Didn’t they say Sora will only be used to internally create training data? Integrated image generation seems more in the neat feature category than some fundamental advantage, but maybe someone has use cases I haven’t considered.

hn_throwaway_99 11 hours ago | parent | prev [-]

OpenAI is killing Sora, though, so it looks like they're taking a page from Anthropic's playbook of focusing on enterprise use cases and seeing that it's more profitable.

dannyw an hour ago | parent [-]

But then they released gpt-image-2, which is clearly SOTA.