simonw 10 hours ago

That's not how modern LLMs are built. The days of dumping everything on the internet into the training data and crossing your fingers are long past.

Anthropic and OpenAI spent most of 2025 focusing almost exclusively on improving the coding abilities of their models, through reinforcement learning combined with additional expert curation of training data.

input_sh 8 hours ago | parent [-]

Silly old me, how could I have forgotten about such drastic improvements between, say, Sonnet 3.7 and Sonnet 4.6. It's 500x better now!

Thank you for teaching me, AI understander. You're definitely not detached from reality one bit. It's me, obviously.

simonw 8 hours ago | parent [-]

Have you seen how many people are talking about the November 2025 inflection point, where the models ticked over from being good at running coding agents to being really good at it?