icpmacdo a day ago

"It feels like these new models are no longer making order of magnitude jumps, but are instead into the long tail of incremental improvements. It seems like we might be close to maxing out what the current iteration of LLMs can accomplish and we're into the diminishing returns phase."

SWE-bench went from ~30-40% to ~70-80% this year.

elcritch 21 hours ago | parent | next [-]

Yet despite this, all the LLMs I've tried struggle to scale much beyond a single module. They may be vast improvements on that test, but in real life they still struggle to stay coherent across larger projects and scales.

bckr 9 hours ago | parent | next [-]

> struggle to scale much beyond a single module

Yes. You must guide coding agents at the level of modules and above. In fact, you have to know good coding patterns and make these patterns explicit.

Claude 4 won’t use uv, pytest, pydantic, mypy, classes, small methods, and small files unless you tell it to.

Once you tell it to, it will do a fantastic job generating well-structured, type-checked Python.
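For illustration, a rough sketch of the style this produces (the names and file layout here are mine, not verbatim model output):

    # models.py
    from pydantic import BaseModel

    class User(BaseModel):
        id: int
        email: str

    def normalize_email(user: User) -> User:
        """Return a copy of the user with the email lowercased."""
        return user.model_copy(update={"email": user.email.lower()})

    # test_models.py
    from models import User, normalize_email

    def test_normalize_email() -> None:
        user = User(id=1, email="Ada@Example.COM")
        assert normalize_email(user).email == "ada@example.com"

Small files, explicit types, and one behavior per function; mypy and pytest keep it honest.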

viraptor 21 hours ago | parent | prev [-]

Those are different kinds of issues. Improving the quality of actions is what we're seeing here. Then, for the larger projects/contexts, the leaders will have to battle it out between improved agents, or actually move to something like RWKV and process the whole project in one go.

morsecodist 21 hours ago | parent [-]

They may be different kinds of issues, but they are the issues that actually matter.

piperswe a day ago | parent | prev | next [-]

How much of that is because the models are optimizing specifically for SWE-bench?

icpmacdo a day ago | parent [-]

Not that much, because it's getting better at all benchmarks.

keeeba 18 hours ago | parent | prev | next [-]

https://arxiv.org/abs/2309.08632

avs733 21 hours ago | parent | prev [-]

3% to 40% is a 13x improvement

40% to 80% is a 2x improvement

It's not that the second leap isn't impressive; it just doesn't change your perspective on reality in the same way.
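Spelled out (a quick back-of-envelope on those same numbers, treating them as pass rates on one benchmark):

    # Same jump, two framings: tasks solved vs. tasks failed.
    old, new = 40, 80                  # pass rates in percent
    print(new / old)                   # 2.0: 2x more tasks solved
    print((100 - old) / (100 - new))   # 3.0: 3x fewer failures (60% down to 20%)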

viraptor 21 hours ago | parent | next [-]

Maybe... It will be interesting to see how the improvements from here compare against other benchmarks. Is 80% -> 90% going to be an incremental fix with minimal impact on the next benchmark (same work, just done better), or an overall 2x improvement on the remaining unsolved cases (a different approach tackling previously missed areas)?

It really depends on how that remaining improvement happens. We'll see soon enough, though: every benchmark nearing 90% is being replaced with something new, and SWE-bench Verified is almost dead now.

energy123 21 hours ago | parent | prev | next [-]

80% to 100% would be an even smaller improvement, but arguably the most impressive and useful (assuming the benchmark isn't in the training data).

andyferris 21 hours ago | parent | prev [-]

I wouldn’t want to wait ages for Claude Code to fail 60% of the time.

A 20% risk seems more manageable, and the improvements speak to better code and problem-solving skills all around.