AI bug reports went from junk to legit overnight, says Linux kernel czar(theregister.com)
56 points by amarant a day ago | 4 comments
dag100 19 hours ago | parent [-]

I think there was a major jump in AI capabilities from Anthropic and OpenAI between the end of 2025 and the start of 2026 that made them far more reliable at writing correct code. I wonder what changed in the secret sauce.

zar1048576 15 hours ago | parent | next [-]

I suspect the big jump came from the release of Claude Opus 4.5/4.6 and GPT-5.x-Codex between Nov '25 and Feb '26. Those models were reportedly trained with heavy reinforcement learning on long coding projects, rewarding only real success (running code, using terminals, fixing their own bugs, and passing tests), alongside better memory for huge codebases and extra coding-specific training.
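
The "rewarding only real success" idea can be sketched as an outcome-based reward: execute the model's candidate code against real tests in a subprocess and grant reward only on a full pass, with no partial credit. This is a toy illustration under my own assumptions, not any lab's actual pipeline; the function name `outcome_reward` is hypothetical.

```python
import subprocess
import sys
import tempfile

def outcome_reward(candidate_code: str, test_code: str, timeout: int = 10) -> float:
    """Binary outcome reward: 1.0 only if the candidate passes all tests.

    Toy sketch of outcome-based RL for coding: the model's output is
    actually executed, and reward comes only from observed success.
    """
    program = candidate_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, timeout=timeout)
        return 1.0 if proc.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # hung code earns nothing

# A correct candidate earns 1.0; a buggy one earns 0.0.
good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    return a - b\n"
tests = "assert add(2, 3) == 5\n"
```

The point of the binary signal is that it can't be gamed by plausible-looking but broken code, which matches the thread's observation that reports became "legit" rather than merely fluent.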

skeledrew 15 hours ago | parent | prev [-]

Nothing drastic, I'd say. It's a continuous stream of small improvements accumulating with each release. Someone compares against a release a few versions back that was publicized as having bad capabilities, notices a large gap, and it looks like a major leap only because of the spacing between capability surveys on the release timeline.

halJordan 8 hours ago | parent [-]

It was drastic and immediate. It switched with the latest versions of Opus and Codex. It's why openclaw is popping off. The models became usable.