jatins 2 hours ago
Experience depends on which FAANG it is. Amazon, for example, doesn't allow Claude Code or Codex, so you are stuck with whatever internal tool they have. Meta, despite competing with these tools, is open to letting its devs use better off-the-shelf ones.
konaraddi an hour ago
I work at AWS and generally use Claude Opus 4.6 1M with Kiro (AWS's public competitor to Claude Code). My experience is positive: Kiro writes most of my code. My complaints:

1. Degraded quality over longer context-window usage. I have to think about managing context and agents instead of focusing solely on the task.

2. It's slow when it's "thinking", especially on simple tasks. For example, I could ask Claude Opus to commit code and submit it for review, but it's just faster to run the commands myself, and I don't want to have to think about conditionally switching to Haiku or other faster models mid-task.

3. It often requires so much upfront planning and feedback-loop setup that I sometimes wonder whether it would have been faster to do the work myself.

A smarter model would be great, but there are bigger productivity gains to be had from a good setup, a faster model, and abstracting away the need to think about agents or context usage. I'm still figuring out a good setup. Something with the speed of Haiku and the reasoning of Opus, without the overhead of managing agents or context, would be sweet.
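The "conditional model switching" being wished away could be sketched as a simple task router: send rote commands to a fast model and everything else to a slower reasoning model. This is a hypothetical illustration, not Kiro's actual behavior; the model names, keyword heuristic, and `pick_model` function are all assumptions for the demo.

```python
# Hypothetical sketch of routing tasks to models by complexity.
# Model names and the keyword heuristic are illustrative assumptions,
# not part of Kiro or any Anthropic API.

FAST_MODEL = "claude-haiku"      # fast and cheap; fine for rote commands
REASONING_MODEL = "claude-opus"  # slower "thinking" model for hard tasks

# Keywords suggesting a mechanical task (a crude heuristic for the demo)
ROTE_KEYWORDS = {"commit", "push", "format", "rename", "submit for review"}

def pick_model(task: str) -> str:
    """Return a model name based on a simple task-complexity heuristic."""
    lowered = task.lower()
    if any(kw in lowered for kw in ROTE_KEYWORDS):
        return FAST_MODEL
    return REASONING_MODEL

print(pick_model("commit code and submit for review"))         # claude-haiku
print(pick_model("refactor the retry logic across services"))  # claude-opus
```

A real router would need fallbacks (a "fast" task that stalls should escalate to the reasoning model), which is exactly the kind of mid-task management the comment wants abstracted away.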
j3k3 2 hours ago
Meta is doing something healthy: signalling that it is behind with its LLM efforts. Nothing wrong with that.