| ▲ | simonw 14 hours ago |
| This is pretty recent - the survey they ran (99 respondents) was August 18 to September 23, 2025, and the field observations (watching developers for 45 minutes, then a 30 minute interview; 13 participants) were August 1 to October 3. The models were mostly GPT-5 and Claude Sonnet 4. The study was too early to catch the 5.x Codex or Claude 4.5 models (bar one mention of Sonnet 4.5). This is notable because a lot of academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation. |
|
| ▲ | utopiah 3 hours ago | parent | next [-] |
| > academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation. This is a recurring argument which I don't understand. Doesn't it simply mean that whatever conclusions they drew were valid then? The research process is about approximating a better description of a phenomenon in order to understand it. It's not about providing a definitive answer. Being "an entire model generation" behind would matter if fundamental problems had been solved in the meantime, e.g. no more hallucinations, but if the changes are incremental then the conclusions most likely remain correct. Which fundamental change (I don't think labeling newer models as "better" is sufficient) do you believe invalidates their conclusions in this specific context? |
| |
| ▲ | soulofmischief 35 minutes ago | parent [-] | | 2025 has been a wild year for agentic coding models. Cutting-edge models in January 2025 don't hold a candle to cutting-edge models in December 2025. Just the jump from Sonnet 3.5 to 3.7 to 4.5, and then Opus 4.5, has been pretty massive in terms of holistic reasoning, deep knowledge, and procedural and architectural adherence. GPT-5 Pro convinced me to pay $200/mo for an OpenAI subscription. The regular 5.2 models, and 5.2 Codex, are leagues better than GPT-4 when it comes to solving problems procedurally, using tools, and deep discussion of scientific, mathematical, philosophical and engineering problems. Models have increasingly long context windows, especially some Google models. OpenAI has released very good image models, and great editing-focused image models have come out across the industry. Predictably better multimodal inference over the short term is unlocking many cool near-term possibilities. Additionally, we have seen some incredible open source and open weight models released this year, some fully commercially viable without restriction. And more and more smaller TTS/STT projects are in active development, with a few notable releases this year. Honestly, the landscape at the end of the year is impressive. There has been great work all over the place, almost too much to keep up with. I'm very interested in the Genie models and a few others. For an idea of the pace: at the beginning of the year, I was mildly successful at getting coding models to make changes in some of my codebases, but the more esoteric problems were out of reach. Progress in general was deliberate and required a lot of manual intervention. By comparison, in the last week I've prototyped six applications, each at a level that would have taken me days to weeks on my own, often developing multiple at the same time, monitoring agentic workflows and intervening only when necessary, relying on long preproduction phases with architectural discussions and development of documentation, requirements, SDDs... and detailed code review and refactoring processes to ensure adherence to constraints. I'm morphing from a very busy solo developer into a very busy product manager. |
|
|
| ▲ | ActionHank 6 hours ago | parent | prev | next [-] |
| For what it's worth, I know this is likely intended to suggest that the new generation of models will somehow be better than any paper will be able to gauge, but that hasn't been my experience. Results are getting worse and less accurate; hell, I even had Claude drop some Chinese into a response out of the blue one day. |
| |
| ▲ | danielbln an hour ago | parent | next [-] | | I absolutely cannot corroborate this; Opus 4.5 has been nothing but stellar. | |
| ▲ | mannycalavera42 an hour ago | parent | prev [-] | | Same here. While getting a command line for ffmpeg, instead of giving me the option "soft-knee" it used "soft-膝" (where 膝 is the Chinese for "knee"). It was easy to spot and figure out, but still... pretty rubbishy. ¯\_(ツ)_/¯ |
|
|
| ▲ | reactordev 12 hours ago | parent | prev | next [-] |
| I knew in October the game had changed. Thanks for keeping us in the know. |
| |
| ▲ | mikasisiki 3 hours ago | parent [-] | | I'm not sure what you mean by “the game has changed.” If you’re referring to Opus 4.5, it’s somewhat better, but it’s far from game-changing. |
|
|
| ▲ | bbor 7 hours ago | parent | prev | next [-] |
| I'm glad someone else noticed the time frames — turns out the lead author here has published 28 distinct preprints in the past 60 days, almost all of which are marked as being officially published already/soon. Certainly some scientists are just absurdly efficient, and all 28 papers involved teams, but that's still a lot. Personally speaking, this gives me second thoughts about their dedication to truly accurately measuring something as notoriously tricky as corporate SWE performance. Any number of cut corners in a novel & empirical study like this would be hard to notice from the final product, especially for casual readers… TBH, the clickbait title doesn't help either! I don't have a specific critique on why 4 months is definitely too short to do it right tho. Just vibe-reviewing, I guess ;) |
| |
|
| ▲ | joenot443 14 hours ago | parent | prev | next [-] |
| Thanks Simon - always quick on the draw. Off your intuition, do you think the same study with Codex 5.2 and Opus 4.5 would see even better results? |
| |
| ▲ | simonw 14 hours ago | parent [-] | | Depends on the participants. If they're cutting-edge LLM users then yes, I think so. If they continue to use LLMs like they would have back in the first half of 2025 I'm not sure if a difference would be noticeable. | | |
| ▲ | mkozlows 13 hours ago | parent | next [-] | | I'm not remotely cutting edge (just switched from Cursor to Codex CLI, have no fancy tooling infrastructure, am not even vaguely considering git worktrees as a means of working), but Opus 4.5 and 5.2 Codex are both so clearly more competent than previous models that I've started just telling them to do high-level things rather than trying to break things down and give them subtasks. If people are really set in their ways, maybe they won't try anything beyond what old models can do, and won't notice a difference, but who's had time to get set in their ways with this stuff? | | |
| ▲ | christophilus 12 hours ago | parent | next [-] | | I mostly agree, but today Opus 4.5 via Claude Code did some pretty dumb stuff in my codebase: N queries where one would do, deep array comparisons where a reference equality check would suffice, a very complex web of nested conditionals that a competent developer would never have written, some edge cases where the backend endpoints didn't properly verify user permissions before overwriting data, etc. It's still hit or miss. The product "worked" when I tested it as a black box, but the code had a lot of rot in it already. Maybe that stuff no longer matters. Maybe it does. Time will tell. | |
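(For readers who want the reference-equality point made concrete: a minimal illustrative sketch in TypeScript; the interface and function names are hypothetical, not taken from the commenter's codebase.)

    // Hypothetical state object whose identity only changes when its contents change.
    interface Item { id: string; tags: string[] }

    // The kind of thing the model reportedly generated: a structural (deep) comparison
    // that re-serializes both objects on every call.
    function changedDeep(prev: Item, next: Item): boolean {
      return JSON.stringify(prev) !== JSON.stringify(next);
    }

    // What a reviewer would expect when the producer already reuses the same object
    // unless something actually changed: a plain reference equality check.
    function changedRef(prev: Item, next: Item): boolean {
      return prev !== next;
    }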
| ▲ | ManuelKiessling 12 hours ago | parent | next [-] | | As someone who's responsible for some very clean codebases and some codebases that grew over many years, warts and all, I always wonder if being subjected to large amounts of not-exactly-wonderful code has the same effect on an LLM that it arguably also has on human developers (myself included occasionally): that they subconsciously lower their normally high bar for quality a bit, as in "well, there are quite a few smells here, let's go with the flow a bit and not overdo the quality". |
| ▲ | remich 12 hours ago | parent | prev | next [-] | | I have had a lot of success lately when working with Opus 4.5 using both the Beads task tracking system and the array of skills under the umbrella of Bad Dave's Robot Army. I don't have a link handy, but you should be able to find it on GitHub. I use the specialized skills for different review tasks (like Architecture Review, Performance Review, Security Review, etc.) on every completed task in addition to my own manual review, and I find that helps keep things from getting out of hand. |
| ▲ | mkozlows 11 hours ago | parent | prev [-] | | I don't think they generally one-shot the tasks, but they do them well enough that you can review the diff, request changes, and get to a good outcome more quickly than if you were spoon-feeding them little tasks and checking each one as you go (as you used to have to do). |
| |
| ▲ | nineteen999 6 hours ago | parent | prev [-] | | Also not a cutting edge user, but I do run my own LLMs at home and have been spending a lot of time with the Claude CLI over the last few months. It's fine if you want Claude to design your APIs without any input, but you'll have less control, and when you dig down into the weeds you'll realise it's created a mess. I like to take both a top-down and bottom-up approach: design the low level API with Claude, fleshing out how it's supposed to work, then design the high level functionality, and then tell it to stop implementing when it hits a problem reconciling the two and the lower level API needs revision. At least for things I'd like to stand the test of time; if it's just a throwaway script or tool I care much less, as long as it gets the job done. |
| |
| ▲ | drbojingle 12 hours ago | parent | prev [-] | | What's the difference between using LLMs now vs the first half of 2025, among the best users? | |
| ▲ | simonw 12 hours ago | parent | next [-] | | Coding agents and much better models. Claude Code or Codex CLI plus Claude Opus 4.5 or GPT 5.2 Codex. The latest models and harnesses can crunch on difficult problems for hours at a time and get to working solutions. Nothing could do that back in ~March. I shared some examples in this comment: https://news.ycombinator.com/item?id=46436885 | | |
| ▲ | William_BB 11 hours ago | parent | next [-] | | OK, I'll bite. Every single example you gave is in hobby-project territory: relatively self-contained, maintainable by 3-4 devs max, within 1k-10k lines of code. I've been successfully using coding agents to create such projects for the past year and it's great, I love it. However, lots of us here work on codebases that are 100x or 1000x the size of the projects you and Karpathy are talking about. Years of domain specific code. From personal experience, coding agents simply don't work at that scale the same way they do for hobby projects. Over the past year or two, I haven't seen any significant improvement from any of the newest models. Building a slightly bigger hobby project is not even close to making these agents work at industrial scale. | |
| ▲ | rjzzleep 6 hours ago | parent | next [-] | | I think that in general there is a big difference between JavaScript/TypeScript projects, big or small, and other projects that actually address a specific problem domain. These two are not the same. The same Claude Code agent can create a lot of the parts of a functional web project, but will struggle to provide anything more than a basic frame for you to build on if you were to add support for a new SoC in some drone firmware. The problem is that everyone working on those more serious projects knows that and treats LLMs accordingly, but people who come from the web space arrive with the expectation that they can replicate the success they have in their domain just as easily, when oftentimes you need some real domain knowledge. I think the difference simply comes down to the sheer volume of training material, i.e. web projects on GitHub. Most "engineers" are actually just framework consumers, and within those frameworks LLMs work great. |
| ▲ | simonw 11 hours ago | parent | prev | next [-] | | Most of the stuff I'm talking about here came out in November. There hasn't been much time for professional teams to build new things with it yet, especially given the holidays! | | |
| ▲ | qweiopqweiop 2 hours ago | parent [-] | | For what it's worth, I'm working with it on a huge professional monorepo, and the difference was also stark. |
| |
| ▲ | reactordev 6 hours ago | parent | prev | next [-] | | For what it's worth, I have Claude coding away at an Unreal Engine codebase. That's a pretty large C++ codebase and it's having no trouble at all. Just a cool several million lines of C++, lovely. |
| ▲ | drbojingle 7 hours ago | parent | prev | next [-] | | Everything is made of smaller parts. I'd like to think we can subdivide a codebase into isolated modules, at least. | |
| ▲ | devin 6 hours ago | parent [-] | | In the real world, not all problems decompose nicely. In fact, I think it may be the case that the problems we actually get paid to solve with code are often of this type. |
| |
| ▲ | baq 10 hours ago | parent | prev | next [-] | | That's right, but it also hints at a solution: split big codebases into parts that are roughly the size of a big hobby project. You'll need to write some docs to be effective at it, which also helps the agents. CI/CD means continuous integration, continuous documentation now. | |
| ▲ | bccdee 9 hours ago | parent | next [-] | | Splitting one big codebase into 100 microservices always seems tempting, except that big codebases are already organized into modules, and that doesn't stop one module's concerns from polluting the other modules' code. What you've got now is 100 different repositories that all depend on each other, get deployed separately, and can only be tested with some awful docker-compose setup. Frankly, given the impedance of hopping back and forth between repos separated by APIs, I'd expect an LLM to do far worse in a microservice ecosystem than in an equivalent monolith. |
| ▲ | majormajor 10 hours ago | parent | prev | next [-] | | I wonder if anyone has tried this thing before, like... micro-projects or such... ;) | |
| ▲ | rjzzleep 6 hours ago | parent | prev [-] | | It's not the size that's the issue; it's the domain. It's tempting to say that adding drivers to Linux is hard because Linux is big, but that's not the issue. |
| |
| ▲ | 11 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | oooyay 8 hours ago | parent | prev [-] | | I worked at Slack earlier this year. Slack adopted Cursor as an option in December of 2024, if memory serves correctly. I had just had a project cut due to a lot of unfortunate reasons, so I was working on it with one other engineer. It was a rewrite of a massive and old Python codebase that ran Slack's internal service catalog. The only reason I was able to finish the backend and frontend rewrites and build an SLO sub-system is coding agents. Up until December I'd been doing that entire rewrite through sixteen-hour days and pure sweat equity. Again, that codebase is millions of lines of Python code, and frankly the agents weren't as good then as they are now. I carefully used globbing rules in Cursor to navigate coding and testing standards. I had a rule that functioned the way people use agents.md now, which was put on every prompt. That honestly got me a lot more mileage than you'd think. A lot of the outcome with these tools comes down to how you use them and how good your developer experience is. If professional software engineers have to think about how to navigate and iterate on different parts of your code, then an LLM will find it doubly difficult. |
| |
| ▲ | epolanski 2 hours ago | parent | prev | next [-] | | Cool, but most developers do mundane stuff like gluing APIs together and implementing business logic, which requires oversight and review. Those crunching hard problems will still review what's produced in search of issues. | |
| ▲ | generic92034 40 minutes ago | parent [-] | | What is (in general) mundane about business logic? This can be highly complex, with deep process integration all over your modules. |
| |
| ▲ | drbojingle 7 hours ago | parent | prev | next [-] | | Are there techniques though? Tech pairing? Something we know now that we didn't then? Or just better models? | | |
| ▲ | simonw 6 hours ago | parent [-] | | Lots of technique stuff. A common observation among LLM nerds is that if the models stopped being improved and froze in time for a year we could still spend all twelve months discovering new capabilities and use-cases for the models we already have. |
| |
| ▲ | mkozlows 11 hours ago | parent | prev [-] | | I was going back and looking at timelines, and was shocked to realize that Claude Code and Cursor's default-to-agentic-mode changes both came out in late February. Essentially the entire history of "mainstream" agentic coding is ten months old. (This helps me better understand the people who are confused/annoyed/dismissive about it, because I remember how dismissive people were about Node, about Docker, about Postgres, about Linux when those things were new too. So many arguments where people would passionately insist that all those things were irredeemably stupid and only suitable for toy/hobby projects.) | |
| ▲ | HarHarVeryFunny 7 hours ago | parent [-] | | The entire history of RL-trained "reasoning models" from o1 to DeepSeek_R1 is basically just a year old! |
|
| |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
|
|
|
|
| ▲ | dheera 14 hours ago | parent | prev | next [-] |
| > academic papers take 6-12 months to come out It takes about 6 months to figure out how to get LaTeX to position figures where you want them, and then another 6 months to fight with reviewers |
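(A minimal sketch of the usual figure-placement escape hatches, for anyone still in month five of that fight; the image file names are placeholders.)

    \documentclass{article}
    \usepackage{graphicx}
    \usepackage{float} % provides the [H] placement specifier

    \begin{document}

    % Offer several acceptable placements (here, top, bottom, own float page)
    % instead of fighting LaTeX's default float algorithm.
    \begin{figure}[htbp]
      \centering
      \includegraphics[width=\linewidth]{figure1} % placeholder file name
      \caption{A figure LaTeX may still move.}
    \end{figure}

    % [H] from the float package pins the figure exactly where it appears in the source.
    \begin{figure}[H]
      \centering
      \includegraphics[width=\linewidth]{figure2} % placeholder file name
      \caption{A figure that stays put.}
    \end{figure}

    \end{document}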
| |
|
| ▲ | trq126154 10 hours ago | parent | prev [-] |
| [flagged] |