| ▲ | simonw 14 hours ago |
| Depends on the participants. If they're cutting-edge LLM users then yes, I think so. If they continue to use LLMs like they would have back in the first half of 2025, I'm not sure a difference would be noticeable. |
|
| ▲ | mkozlows 13 hours ago | parent | next [-] |
| I'm not remotely cutting edge (just switched from Cursor to Codex CLI, have no fancy tooling infrastructure, am not even vaguely considering git worktrees as a means of working), but Opus 4.5 and 5.2 Codex are both so clearly more competent than previous models that I've started just telling them to do high-level things rather than trying to break things down and give them subtasks. If people are really set in their ways, maybe they won't try anything beyond what old models can do, and won't notice a difference, but who's had time to get set in their ways with this stuff? |
| |
| ▲ | christophilus 12 hours ago | parent | next [-] | | I mostly agree, but today Opus 4.5 via Claude Code did some pretty dumb stuff in my codebase: N queries where one would do, a deep array comparison where a reference equality check would suffice, a very complex web of nested conditionals that a competent developer would never have written, some edge cases where the backend endpoints didn’t properly verify user permissions before overwriting data, etc. It’s still hit or miss. The product “worked” when I tested it as a black box, but the code had a lot of rot in it already. Maybe that stuff no longer matters. Maybe it does. Time will tell. | | |
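To make the array-comparison point concrete, here's a minimal TypeScript sketch of the pattern (reconstructed from memory, not the actual code; the variable names are made up):

    // What the model wrote: an O(n) deep comparison via serialization.
    const prevItems: string[] = ["a", "b"];
    const nextItems: string[] = prevItems;
    const changedDeep = JSON.stringify(prevItems) !== JSON.stringify(nextItems);

    // What the context allowed: the array is only ever replaced wholesale,
    // so a cheap reference equality check would have sufficed.
    const changedCheap = prevItems !== nextItems;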
| ▲ | ManuelKiessling 12 hours ago | parent | next [-] | | As someone who’s responsible for some very clean codebases and some codebases that grew over many years, warts and all, I always wonder whether being subjected to large amounts of not-exactly-wonderful code has the same effect on an LLM that it arguably has on human developers (myself included, occasionally): they subconsciously lower their normally high bar for quality a bit, as in “well, there are quite a few smells here, let’s go with the flow a bit and not overdo the quality”. | |
| ▲ | remich 12 hours ago | parent | prev | next [-] | | I've had a lot of success lately working with Opus 4.5 using both the Beads task-tracking system and the array of skills under the umbrella of Bad Dave's Robot Army. I don't have a link handy, but you should be able to find it on GitHub. I use the specialized skills for different review tasks (like Architecture Review, Performance Review, Security Review, etc.) on every completed task in addition to my own manual review, and I find that helps keep things from getting out of hand. | |
| ▲ | mkozlows 11 hours ago | parent | prev [-] | | I don't think they generally one-shot the tasks, but they do them well enough that you can review the diff, request changes, and reach a good outcome more quickly than if you were spoon-feeding the model little tasks and checking them as you go (as you used to have to do). |
| |
| ▲ | nineteen999 6 hours ago | parent | prev [-] | | Also not a cutting-edge user, but I do run my own LLMs at home and have been spending a lot of time with Claude CLI over the last few months. It’s fine if you want Claude to design your APIs without any input, but you’ll have less control, and when you dig down into the weeds you’ll realise it’s created a mess. I like to take both a top-down and bottom-up approach: design the low-level API with Claude, fleshing out how it’s supposed to work, then design the high-level functionality, and tell it to stop implementing when it hits a problem reconciling the two and the lower-level API needs revision. At least for things I’d like to stand the test of time; if it’s just a throwaway script or tool I care much less, as long as it gets the job done. |
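To illustrate the kind of reconciliation problem I mean, here's a minimal TypeScript sketch (all names are hypothetical, not from a real project):

    // Low-level API, designed first with the agent:
    interface KvStore {
      get(key: string): Promise<string | null>;
      put(key: string, value: string): Promise<void>;
    }

    // High-level functionality, designed second:
    async function importRecords(store: KvStore, records: Map<string, string>): Promise<void> {
      // Reconciliation problem: the high level wants an atomic bulk import,
      // but the low-level API only offers single-key writes. This is where
      // I want the agent to stop and flag that KvStore needs revision
      // (e.g. a batch or transaction primitive) rather than papering over
      // the gap with a loop that can fail halfway through.
      for (const [key, value] of records) {
        await store.put(key, value);
      }
    }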
|
|
| ▲ | drbojingle 12 hours ago | parent | prev [-] |
| What's the difference between using LLMs now vs the first half of 2025 among the best users? |
| |
| ▲ | simonw 12 hours ago | parent | next [-] | | Coding agents and much better models. Claude Code or Codex CLI plus Claude Opus 4.5 or GPT 5.2 Codex. The latest models and harnesses can crunch on difficult problems for hours at a time and get to working solutions. Nothing could do that back in ~March. I shared some examples in this comment: https://news.ycombinator.com/item?id=46436885 | | |
| ▲ | William_BB 11 hours ago | parent | next [-] | | Ok, I will bite. Every single example you gave is in hobby-project territory: relatively self-contained, maintainable by 3-4 devs max, within 1k-10k lines of code. I've been successfully using coding agents to create such projects for the past year and it's great, I love it. However, lots of us here work on codebases that are 100x, 1000x the size of these projects you and Karpathy are talking about. Years of domain-specific code. From personal experience, coding agents simply don't work at that scale the same way they do for hobby projects. Over the past year or two, I haven't seen any significant improvement from any of the newest models. Building a slightly bigger hobby project is not even close to making these agents work at industrial scale. | | |
| ▲ | rjzzleep 6 hours ago | parent | next [-] | | I think that in general there is a big difference between JavaScript/TypeScript projects, big or small, and projects that address a more specialized domain. The same Claude Code agent can create large parts of a functional web project, but will struggle to provide anything beyond a basic frame for you to build on if you were to add support for a new SoC to some drone firmware. The problem is that everyone working on those more specialized projects knows that and treats LLMs accordingly, but people coming from the web space arrive expecting they can replicate the success they have in their domain just as easily, when oftentimes you need real domain knowledge. I think the difference simply comes down to the sheer volume of training material, i.e. web projects on GitHub. Most "engineers" are actually just framework consumers, and within those frameworks LLMs work great. | |
| ▲ | simonw 11 hours ago | parent | prev | next [-] | | Most of the stuff I'm talking about here came out in November. There hasn't been much time for professional teams to build new things with it yet, especially given the holidays! | | |
| ▲ | qweiopqweiop 2 hours ago | parent [-] | | For what it's worth, I'm working with it on a huge professional monorepo, and the difference was also stark. |
| |
| ▲ | reactordev 6 hours ago | parent | prev | next [-] | | For what it’s worth, I have Claude coding away at an Unreal Engine codebase. That’s a pretty large C++ codebase and it’s having no trouble at all. Just a cool several million lines of lovely C++. | |
| ▲ | drbojingle 7 hours ago | parent | prev | next [-] | | Everything is made of smaller parts. I'd like to think we can at least subdivide a codebase into isolated modules. | |
| ▲ | devin 6 hours ago | parent [-] | | In the real world, not all problems decompose nicely. In fact, I think it may be the case that the problems we actually get paid to solve with code are often of this type. |
| |
| ▲ | baq 10 hours ago | parent | prev | next [-] | | That’s right, but it also hints at a solution: split big codebases into parts that are roughly the size of a big hobby project. You’ll need to write some docs to be effective at it, which also helps the agents. CI/CD means continuous integration, continuous documentation now. | | |
| ▲ | bccdee 9 hours ago | parent | next [-] | | Splitting one big codebase into 100 microservices always seems tempting, except that big codebases are already split into modules, and that doesn't stop one module's concerns from polluting the others' code. What you've got now is 100 different repositories that all depend on each other, get deployed separately, and can only be tested with some awful docker-compose setup. Frankly, given the friction of hopping back and forth between repos separated by APIs, I'd expect an LLM to do far worse in a microservice ecosystem than in an equivalent monolith. | |
| ▲ | majormajor 10 hours ago | parent | prev | next [-] | | I wonder if anyone has tried this thing before, like... micro-projects or such... ;) | |
| ▲ | rjzzleep 6 hours ago | parent | prev [-] | | It's not the size that's the issue; it's the domain. It's tempting to say that adding drivers to Linux is hard because Linux is big, but that's not the issue. |
| |
| ▲ | oooyay 8 hours ago | parent | prev [-] | | I worked at Slack earlier this year. Slack adopted Cursor as an option in December of 2024, if memory serves correctly. I had just had a project cut due to a lot of unfortunate reasons, so I was working on it with one other engineer. It was a rewrite of a massive, old Python codebase that ran Slack's internal service catalog. The only reason I was able to finish rewriting the backend and frontend and build an SLO sub-system is coding agents. Up until December I'd been doing that entire rewrite through sixteen-hour days and pure sweat equity. Again, that codebase is millions of lines of Python, and frankly the agents weren't as good then as they are now. I carefully used globbing rules in Cursor to navigate coding and testing standards. I had a rule that functioned the way people use agents.md now, which was put on every prompt. That honestly got me a lot more mileage than you'd think. A lot of the outcomes of these tools come down to how you use them and how good your developer experience is. If professional software engineers have to think about how to navigate and iterate on different parts of your code, then an LLM will find it doubly difficult. |
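For anyone who hasn't used them: a glob-scoped Cursor project rule is a small file under .cursor/rules/ that looks roughly like this (a hypothetical sketch, not the actual Slack rule; the exact frontmatter fields vary by Cursor version):

    ---
    description: Standards for the service catalog backend
    globs: services/**/*.py,tests/**/*.py
    alwaysApply: false
    ---
    - Use pytest, not unittest; shared fixtures live in tests/conftest.py.
    - Every new endpoint needs a permission-check test.
    - Match the existing module layout; don't add new top-level packages.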
| |
| ▲ | epolanski 2 hours ago | parent | prev | next [-] | | Cool, but most developers do mundane stuff like gluing APIs together and implementing business logic, which requires oversight and review. Those crunching hard problems will still review what's produced in search of issues. | |
| ▲ | generic92034 39 minutes ago | parent [-] | | What is (in general) mundane about business logic? This can be highly complex, with deep process integration all over your modules. |
| |
| ▲ | drbojingle 7 hours ago | parent | prev | next [-] | | Are there techniques though? Tech pairing? Something we know now that we didn't then? Or just better models? | | |
| ▲ | simonw 6 hours ago | parent [-] | | Lots of technique stuff. A common observation among LLM nerds is that if the models stopped being improved and froze in time for a year we could still spend all twelve months discovering new capabilities and use-cases for the models we already have. |
| |
| ▲ | mkozlows 11 hours ago | parent | prev [-] | | I was going back and looking at timelines, and was shocked to realize that Claude Code and Cursor's default-to-agentic-mode change both came out in late February. Essentially the entire history of "mainstream" agentic coding is ten months old. (This helps me better understand the people who are confused/annoyed/dismissive about it, because I remember how dismissive people were about Node, about Docker, about Postgres, about Linux when those things were new too. So many arguments where people would passionately insist that all those things were irredeemably stupid and only suitable for toy/hobby projects.) | |
| ▲ | HarHarVeryFunny 7 hours ago | parent [-] | | The entire history of RL-trained "reasoning models" from o1 to DeepSeek-R1 is basically just a year old! |
|
| |
|