| ▲ | Professional software developers don't vibe, they control(arxiv.org) |
| 163 points by dpflan 13 hours ago | 190 comments |
| |
|
| ▲ | simonw 12 hours ago | parent | next [-] |
This is pretty recent - the survey they ran (99 respondents) was August 18 to September 23 2025, and the field observations (watching developers for 45 minutes followed by a 30 minute interview, 13 participants) were August 1 to October 3. The models were mostly GPT-5 and Claude Sonnet 4. The study was too early to catch the 5.x Codex or Claude 4.5 models (bar one mention of Sonnet 4.5). This is notable because a lot of academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation. |
| |
| ▲ | utopiah an hour ago | parent | next [-] | | > academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation. This is a recurring argument which I don't understand. Doesn't it simply mean that whatever conclusions they drew were valid then? The research process is about approximating a better description of a phenomenon in order to understand it. It's not about providing a definitive answer. Being "an entire model generation" behind would matter if a fundamental problem, e.g. hallucinations, had been solved, but if the changes are incremental then most likely the conclusions remain correct. Which fundamental change (I don't think labeling newer models as "better" is sufficient) do you believe invalidates their conclusions in this specific context? | |
| ▲ | ActionHank 4 hours ago | parent | prev | next [-] | | For what it's worth, I know this is likely intended to read as "the new generation of models will somehow be better than any paper will be able to gauge", but that hasn't been my experience. Results are getting worse and less accurate; hell, I even had Claude drop some Chinese into a response out of the blue one day. | |
| ▲ | reactordev 10 hours ago | parent | prev | next [-] | | I knew in October the game had changed. Thanks for keeping us in the know. | | |
| ▲ | mikasisiki 44 minutes ago | parent [-] | | I'm not sure what you mean by “the game has changed.” If you’re referring to Opus 4.5, it’s somewhat better, but it’s far from game-changing. |
| |
| ▲ | bbor 5 hours ago | parent | prev | next [-] | | I'm glad someone else noticed the time frames — turns out the lead author here has published 28 distinct preprints in the past 60 days, almost all of which are marked as being officially published already/soon. Certainly some scientists are just absurdly efficient, and all 28 did involve teams, but that's still a lot. Personally speaking, this gives me second thoughts about their dedication to truly accurately measuring something as notoriously tricky as corporate SWE performance. Any number of cut corners in a novel & empirical study like this would be hard to notice from the final product, especially for casual readers… TBH, the clickbait title doesn't help either! I don't have a specific critique on why 4 months is definitely too short to do it right tho. Just vibe-reviewing, I guess ;) | | | |
| ▲ | joenot443 12 hours ago | parent | prev | next [-] | | Thanks Simon - always quick on the draw. Off your intuition, do you think the same study with Codex 5.2 and Opus 4.5 would see even better results? | | |
| ▲ | simonw 11 hours ago | parent [-] | | Depends on the participants. If they're cutting-edge LLM users then yes, I think so. If they continue to use LLMs like they would have back in the first half of 2025 I'm not sure if a difference would be noticeable. | | |
| ▲ | mkozlows 11 hours ago | parent | next [-] | | I'm not remotely cutting edge (just switched from Cursor to Codex CLI, have no fancy tooling infrastructure, am not even vaguely considering git worktrees as a means of working), but Opus 4.5 and 5.2 Codex are both so clearly more competent than previous models that I've started just telling them to do high-level things rather than trying to break things down and give them subtasks. If people are really set in their ways, maybe they won't try anything beyond what old models can do, and won't notice a difference, but who's had time to get set in their ways with this stuff? | | |
| ▲ | christophilus 10 hours ago | parent | next [-] | | I mostly agree, but today Opus 4.5 via Claude Code did some pretty dumb stuff in my codebase — N queries where one would do, deep array comparison where a reference equality check would suffice, a very complex web of nested conditionals which a competent developer would never have written, some edge cases where the backend endpoints didn't properly verify user permissions before overwriting data, etc. It's still hit or miss. The product "worked" when I tested it as a black box, but the code had a lot of rot in it already. Maybe that stuff no longer matters. Maybe it does. Time will tell. | | |
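To make the deep-comparison complaint concrete, here is a hypothetical sketch (invented function names, not the actual code from that codebase) of the pattern: the agent reaches for a structural comparison even when the calling code only ever hands it a brand-new object on a real change, so a constant-time identity check would do.

```python
# Hypothetical illustration of the "deep comparison where reference
# equality would suffice" smell. All names are invented for this sketch.

def items_changed_deep(old, new):
    # What the agent tends to write: walks every element, O(n) per call.
    return old != new

def items_changed_by_identity(old, new):
    # What a reviewer expects when `new` is only ever a fresh object on
    # an actual change: a constant-time identity check.
    return old is not new

cache = [1, 2, 3]
print(items_changed_by_identity(cache, cache))        # same object -> False
print(items_changed_by_identity(cache, list(cache)))  # fresh copy -> True
```

Note the deep version also reports "unchanged" for an equal fresh copy, which can silently mask churn the caller wanted to see; either way, choosing between the two is a design decision the agent has to be steered on.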
| ▲ | ManuelKiessling 10 hours ago | parent | next [-] | | As someone who's responsible for some very clean codebases and some codebases that grew over many years, warts and all, I always wonder if being subjected to large amounts of not-exactly-wonderful code has the same effect on an LLM that it arguably also has on human developers (myself included, occasionally): that they subconsciously lower their normally high bar for quality a bit, as in "well, there are quite some smells here, let's go with the flow a bit and not overdo the quality". |
| ▲ | remich 10 hours ago | parent | prev | next [-] | | I have had a lot of success lately when working with Opus 4.5 using both the Beads task tracking system and the array of skills under the umbrella of Bad Dave's Robot Army. I don't have a link handy, but you should be able to find it on GitHub. I use the specialized skills for different review tasks (like Architecture Review, Performance Review, Security Review, etc.) on every completed task in addition to my own manual review, and I find that that helps to keep things from getting out of hand. | |
| ▲ | mkozlows 9 hours ago | parent | prev [-] | | I don't think they generally one-shot the tasks; but they do them well enough that you can review the diff and make requests for changes and have it succeed in a good outcome more quickly than if you were spoon-feeding it little tasks and checking them as you go (as you used to have to do). |
| |
| ▲ | nineteen999 4 hours ago | parent | prev [-] | | Also not a cutting-edge user, but I do run my own LLMs at home and have been spending a lot of time with Claude CLI the last few months. It's fine if you want Claude to design your APIs without any input, but you'll have less control, and when you dig down into the weeds you'll realise it's created a mess. I like to take both a top-down and bottom-up approach - design the low level API with Claude fleshing out how it's supposed to work, then design the high level functionality, and then tell it to stop implementing when it hits a problem reconciling the two and the lower level API needs revision. At least for things I'd like to stand the test of time; if it's just a throwaway script or tool I care much less, as long as it gets the job done. |
| |
| ▲ | drbojingle 10 hours ago | parent | prev [-] | | What's the difference between using llms now vs the first half of 2025 among the best users? | | |
| ▲ | simonw 10 hours ago | parent [-] | | Coding agents and much better models. Claude Code or Codex CLI plus Claude Opus 4.5 or GPT 5.2 Codex. The latest models and harnesses can crunch on difficult problems for hours at a time and get to working solutions. Nothing could do that back in ~March. I shared some examples in this comment: https://news.ycombinator.com/item?id=46436885 | | |
| ▲ | epolanski 8 minutes ago | parent | next [-] | | Cool, but most developers do mundane stuff like glueing APIs and implementing business logic, which require oversight and review. Those crunching hard problems will still review what's produced in search of issues. | |
| ▲ | William_BB 9 hours ago | parent | prev | next [-] | | Ok I will bite. Every single example you gave is in a hobby project territory. Relatively self-contained, maintainable by 3-4 devs max, within 1k-10k lines of code. I've been successfully using coding agents to create such projects for the past year and it's great, I love it. However, lots of us here work on codebases that are 100x, 1000x the size of these projects you and Karpathy are talking about. Years of domain specific code. From personal experience, coding agents simply don't work at that scale the same way they do for hobby projects. Over the past year or two, I did not see any significant improvement from any of the newest models. Building a slightly bigger hobby project is not even close to making these agents work at industrial scale. | | |
| ▲ | rjzzleep 4 hours ago | parent | next [-] | | I think that in general there is a big difference between javascript/typescript projects, big or small, and projects that actually address a specific domain. These two are not the same. The same Claude Code agent can create large parts of a functional web project, but will struggle to provide anything more than a base frame for you to build on if you were to add support for a new SoC in some drone firmware. The problem is that everyone working on those more serious projects knows that and treats LLMs accordingly, but people who come from the web space arrive with the expectation that they can replicate the success they have in their domain just as easily, when oftentimes you need to have some domain knowledge. I think the difference simply comes down to the sheer volume of training material, i.e. web projects on GitHub. Most "engineers" are actually just framework consumers, and within those frameworks LLMs work great. |
| ▲ | simonw 9 hours ago | parent | prev | next [-] | | Most of the stuff I'm talking about here came out in November. There hasn't been much time for professional teams to build new things with it yet, especially given the holidays! | |
| ▲ | baq 8 hours ago | parent | prev | next [-] | | That’s right, but it also hints at a solution: split big code bases into parts that are roughly the size of a big hobby project. You’ll need to write some docs to be effective at it, which also helps agents. CICD means continuous integration continuous documentation now. | | |
| ▲ | bccdee 7 hours ago | parent | next [-] | | Splitting one big codebase into 100 microservices always seems tempting, except that big codebases already exist in modules and that doesn't stop one module's concerns from polluting the other modules' code. What you've got now is 100 different repositories that all depend on each other, get deployed separately, and can only be tested with some awful docker-compose setup. Frankly, given the impedance of hopping back and forth between repos separated by APIs, I'd expect an LLM to do far worse in a microservice ecosystem than in an equivalent monolith. | |
| ▲ | majormajor 8 hours ago | parent | prev | next [-] | | I wonder if anyone has tried this thing before, like... micro-projects or such... ;) | |
| ▲ | rjzzleep 4 hours ago | parent | prev [-] | | It's not the size that's the issue, it's the domain that is. It's tempting to say that adding drivers to Linux is hard because Linux is big, but that's not the issue. |
| |
| ▲ | drbojingle 5 hours ago | parent | prev | next [-] | | Everything is made of smaller parts. I'd like to think we can sub divide a code base into isolated modules at least. | | |
| ▲ | devin 4 hours ago | parent [-] | | In the real world, not all problems decompose nicely. In fact, I think it may be the case that the problems we actually get paid to solve with code are often of this type. |
| |
| ▲ | reactordev 4 hours ago | parent | prev | next [-] | | For what it's worth, I have Claude coding away at the Unreal Engine codebase. That's a pretty large C++ codebase and it's having no trouble at all. Just a cool several million lines of lovely C++. |
| ▲ | oooyay 6 hours ago | parent | prev [-] | | I worked at Slack earlier this year. Slack adopted Cursor as an option in December of 2024 if memory serves correctly. I had just had a project cut due to a lot of unfortunate reasons so I was working on it with one other engineer. It was a rewrite of a massive and old Python code base that ran Slack's internal service catalog. The only reason I was able to finish rewrites of the backend, frontend, and build an SLO sub-system is because of coding agents. Up until December I'd been doing that entire rewrite through sixteen hour days and just pure sweat equity. Again, that codebase is millions of lines of Python code and frankly the agents weren't as good then as they are now. I carefully used globbing rules in Cursor to navigate coding and testing standards. I had a rule that functioned as how people use agents.md now, which was put on every prompt. That honestly got me a lot more mileage than you'd think. A lot of the outcomes of these tools are how you use them and how good your developer experience is. If professional software engineers have to think about how to navigate and iterate on different parts of your code, then an LLM will find it doubly difficult. |
| |
| ▲ | drbojingle 5 hours ago | parent | prev | next [-] | | Are there techniques though? Tech pairing? Something we know now that we didn't then? Or just better models? | | |
| ▲ | simonw 4 hours ago | parent [-] | | Lots of technique stuff. A common observation among LLM nerds is that if the models stopped being improved and froze in time for a year we could still spend all twelve months discovering new capabilities and use-cases for the models we already have. |
| |
| ▲ | mkozlows 8 hours ago | parent | prev [-] | | I was going back and looking at timelines, and was shocked to realize that Claude Code and Cursor's default-to-agentic-mode changes both came out in late February. Essentially the entire history of "mainstream" agentic coding is ten months old. (This helps me understand better the people who are confused/annoyed/dismissive about it, because I remember how dismissive people were about Node, about Docker, about Postgres, about Linux when those things were new too. So many arguments where people would passionately insist that all those things were irredeemably stupid and only suitable for toy/hobby projects.) | |
| ▲ | HarHarVeryFunny 5 hours ago | parent [-] | | The entire history of RL-trained "reasoning models" from o1 to DeepSeek_R1 is basically just a year old! |
|
|
|
|
| |
| ▲ | dheera 12 hours ago | parent | prev [-] | | > academic papers take 6-12 months to come out It takes about 6 months to figure out how to get LaTeX to position figures where you want them, and then another 6 months to fight with reviewers | | |
|
|
| ▲ | runtimepanic 12 hours ago | parent | prev | next [-] |
| The title is doing a lot of work here. What resonated with me is the shift from “writing code” to “steering systems” rather than the hype framing. Senior devs already spend more time constraining, reviewing, and shaping outcomes than typing syntax. AI just makes that explicit. The real skill gap isn’t prompt cleverness, it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants. That part doesn’t scale magically. |
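As one hedged sketch of what "fencing it in with tests, architecture, and invariants" can look like in practice: the function and its business rule below are made up, but the shape is the point. However an agent rewrites the implementation, these properties must keep holding.

```python
# Sketch of an invariant-style test used to fence in agent edits.
# apply_discount and its rule are hypothetical, for illustration only.

def apply_discount(price_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price_cents - (price_cents * percent) // 100

def test_discount_invariants():
    # Invariants: a discount never makes the price negative and never
    # raises it above the original, for any valid input combination.
    for price in (0, 1, 999, 10_000):
        for pct in (0, 10, 50, 100):
            result = apply_discount(price, pct)
            assert 0 <= result <= price

test_discount_invariants()
```

A reviewer who knows the domain writes the invariant once; the agent can then be told "make the tests pass" with far less risk of a confidently wrong implementation slipping through.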
| |
| ▲ | asmor 12 hours ago | parent | next [-] | | Is anyone else getting more mentally exhausted by this? I get more done, but I also miss the relaxing code typing in the middle of the process. | | |
| ▲ | epolanski 4 minutes ago | parent | next [-] | | Yes, it's taxing and mentally draining; reading code and connecting the dots is always harder than writing it. And if you let the AI run too loose, as when you try to vibe-code an entirely new program, you end up in the situation where in one day you have a good prototype and then easily spend five times as long sorting out the many issues and refactoring in order to have it scale to the next features. | |
| ▲ | agumonkey 11 hours ago | parent | prev | next [-] | | I think there are two groups of people emerging: deep / fast / craft-and-decomposition-loving vs black box / outcome-only. I've seen people unable to work at average speed on small features suddenly reach above-average output through an LLM CLI, and I could sense the pride in them. Which is at odds with my experience of work... I love to dig down, know a lot, model and find abstractions on my own. There an LLM will 1) not understand how my brain works 2) produce something workable but that requires me to stretch mentally... and most of the time I leave numb. In the last month I've seen many people expressing similar views. ps: thanks everybody for the answers, interesting to read your pov | |
| ▲ | remich 10 hours ago | parent | next [-] | | I get what you're saying, but I would say that this does not match my own experience. For me, prior to the agentic coding era, the problem was always that I had way more ideas for features, tools, or projects than I had the capacity to build when I had to confront the work of building everything by hand, also dealing with the inevitable difficulties in procrastination and getting started. I am a very above-average engineer when it comes to speed at completing work well, whether that's typing speed or comprehension speed, and still these tools have felt like giving me a jetpack for my mind. I can get things done in weeks that would have taken me months before, and that opens up space to consider new areas that I wouldn't have even bothered exploring before because I would not have had the time to execute on them well. | |
| ▲ | ronsor 10 hours ago | parent | prev | next [-] | | The sibling comments (from remich and sanufar) match my experience. 1. I do love getting into the details of code, but I don't mind having an LLM handle boilerplate. 2. There isn't a binary between having an LLM generate all the code and writing it all myself. 3. I still do most of the design work because LLMs often make questionable design decisions. 4. Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable. | | |
| ▲ | zahlman 10 hours ago | parent [-] | | > I do love getting into the details of code, but I don't mind having an LLM handle boilerplate. My usual thought is that boilerplate tells me, by existing, where the system is most flawed. I do like the idea of having a tool that quickly patches the problem while also forcing me to think about its presence. > There isn't a binary between having an LLM generate all the code and writing it all myself. I still do most of the design work because LLMs often make questionable design decisions. One workflow that makes sense to me is to have the LLM commit on a branch; fix simple issues instead of trying to make it work (with all the worry of context poisoning); refactor on the same branch; merge; and then repeat for the next feature — starting more or less from scratch except for the agent config (CLAUDE.md etc.). Does that sound about right? Maybe you do something less formal? > Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable. Yeah, that sounds about right. |
| |
| ▲ | sanufar 10 hours ago | parent | prev | next [-] | | I think for me, the difference really comes down to how much ownership I want to take in the project. If it's something like a custom kernel that I'm building, the real fun is in reading through docs, learning about systems, and trying to craft the perfect abstractions; but if it's wiring up a simple pipeline that sends me a text whenever my bus arrives, I'm happy to let an LLM crank that out for me. I've realized that a lot of my coding sits on this personal-satisfaction vs. utility matrix, and LLMs let me focus a lot more energy on high-satisfaction projects. |
| ▲ | zahlman 10 hours ago | parent | prev [-] | | > deep / fast / craft-and-decomposition-loving vs black box / outcome-only As a (self-reported) craft-and-decomposition lover, I wouldn't call the process "fast". Certainly it's much faster than if I were trying to take the same approach without the same skills; and certainly I could slow it down with over-engineering. (And "deep" absolutely fits.) But the people I've known that I'd characterize as strongly "outcome-only", were certainly capable of sustaining some pretty high delta-LoC per day. |
| |
| ▲ | jghn 12 hours ago | parent | prev | next [-] | | That's kind of the point here. Once a dev reached a certain level, they often weren't doing much "relaxing code typing" anyways before the AI movement. I don't find it to be much different than being a tech lead, architect, or similar role. | | |
| ▲ | remich 10 hours ago | parent [-] | | As a former tech lead and now staff engineer, I definitely agree with this. I read a blog post a couple of months ago that theorized that the people that would adopt these technologies the best were people in the exact roles that you describe. I think because we were already used to having to rely on other people to execute on our plans and ideas because they were simply too big to accomplish by ourselves. Now that we have agents to do these things, it's not really all that different - although it is a different management style working around their limitations. | | |
| ▲ | jghn 9 hours ago | parent [-] | | Exactly. I've been a tech lead, have led large, cross-org projects, been an engineering manager, and held similar roles. For years, when mentoring upcoming developers, what I always found to be the most challenging transition was the inflection point between "I deliver most of my value by coding" and "I deliver most of my value by empowering other people to deliver". I think that's what we're seeing here. People who have made this transition are already used to working this way. Both versions have their own quirks and challenges, but at a high level it abstracts. | |
| ▲ | 9rx 8 hours ago | parent [-] | | LLMs are just a programming language/compiler/REPL, though, so there is nothing out of the ordinary for developers. Except what is different is the painfully slow compile-time-to-code ratio. You write code for a few minutes... and then wait. Then spend a few more minutes writing code... and then wait. That is where the exhaustion comes from. At least in the olden days[1] you could write code for days before compiling, which reduced the pain. Long compilation times have always been awful, but they are less frustrating when you can defer them until the next blue moon. LLMs don't (yet) seem to be able to handle that. If you feed them more than small amounts of code at a time they quickly go off the rails. With that said, while you could write large amounts of code and defer compilation until the next blue moon, it is a skill to be able to do that. Even in C++, juniors seem to like to write a few lines of code and then turn to compiling the results to make sure they are on the right track. I expect that is the group of people who feels most at home with LLMs. Spending a few minutes writing code and then waiting on compilation isn't abnormal for them. But presumably the tooling will improve with time. [1] https://xkcd.com/303/ | |
| ▲ | recursive 5 hours ago | parent [-] | | Programming languages are structured and have specifications. It is possible to know what code will do just by reading it. | | |
| ▲ | 9rx 4 hours ago | parent [-] | | Well designed ones do, at least. LLMs, in their infancy, still bring a lot of undefined behaviour, which is why you end up stuck in the code for a few minutes -> compile -> wait -> repeat cycle. But that is not a desirable property and won't remain acceptable as the technology matures. | |
| ▲ | recursive 3 hours ago | parent [-] | | I don't see any way this is changing, acceptable or not. | | |
| ▲ | 9rx 3 hours ago | parent [-] | | It is quite possible the tools will never improve beyond where they sit today, sure, but then usage will naturally drift away from that fatiguing use (not all use, obviously). The constant compile/wait cycle is exhausting exactly because it is not productive. Businesses are currently willing to accept that lack of productivity as an investment into figuring out how to tame the tools. There is a lot of hope that all the problems can be solved if we keep trying to solve them. And, in fairness, we have gotten a lot closer than we were just a year or so ago towards that end, so the optimism currently remains strong. However, that cannot go on forever. At some point the investment has to prove itself, else the plug will be pulled. And yes, it may ultimately be a dead end. Absolutely. It wouldn't be the first failure in software development. |
|
|
|
|
|
|
| |
| ▲ | tikimcfee 12 hours ago | parent | prev | next [-] | | Ya know, I have to admit feeling something like this. Normally, the amount of stuff I put together in a work day offers a sense of completion or even a bit of a dopamine bump because of a "job well done". With this recent work I've been doing, it's instead felt like I've been spending a multiplier more energy communicating intent instead of doing the work myself; that communication seems to be making me more tired than the work itself. Similar? | | |
| ▲ | whynotminot 11 hours ago | parent | next [-] | | It feels like we all signed up to be ICs, but now we’re middle managers and our reports are bots. | | |
| ▲ | MikeTheGreat 9 hours ago | parent | next [-] | | I forget where I saw this (a Medium post, somewhere) but someone summed this up as "I didn't sign up for this just to be a tech priest for the machine god". | | |
| ▲ | whstl 8 hours ago | parent [-] | | Someone commented yesterday that managers and other higher-ups are "already ok with non-deterministic outputs", because that's what engineers give them. As a manager/tech-lead, I've kind of been a tech priest for some time. |
| |
| ▲ | senshan 9 hours ago | parent | prev [-] | | > and our reports are bots. With no gossip, rivalry or backstabbing. Super polite and patient, which is very inspiring. We also brutally churn them, "laying off" the previously latest model once the new latest is available. |
| |
| ▲ | perfmode 11 hours ago | parent | prev | next [-] | | You're possibly not entering into the flow state anymore. Flow is effortless, and it is rejuvenating. I believe that while communication can be satisfying, it's not as rejuvenating as resting in our own Being and simply allowing the action to unfold without mental contraction. Flow states: when the right level of challenge and capability align and you become intimate with the problem. The boundaries of me and the problem dissolve and creativity springs forth. Emerging satisfied. Nourished. |
| ▲ | johnsmith1840 9 hours ago | parent | prev [-] | | This is why I think LLMs will make us all a LOT smarter. Typing out raw code gave us stretches where we weren't thinking hard in between; now it's just the most intense thought processes, 100% of the day. | |
| ▲ | falkensmaize 4 hours ago | parent [-] | | It seems pretty obvious that the opposite is true. I know I’ve experienced some serious skill atrophy that I’m now having to actively resist. There’s a lot lost by no longer having to interact with the raw materials of your craft. Thinking is a skill that is reinforced by reading, designing and writing code. When you outsource your thinking to an LLM your ability to think doesn’t magically improve…it degrades. |
|
| |
| ▲ | bccdee 7 hours ago | parent | prev | next [-] | | So far what I've been doing is, I look for the parts that seem like they'd be rewarding to code and I do them myself with no input from the machine whatsoever. It's hard to really understand a codebase without spending time with the code, and when you're using a model, I think there's a risk of things changing more quickly than you can internalize them. Also, I worry I'll get too comfortable bossing chatbots around & I'll become reluctant to get my hands dirty and produce code directly. People talk about ruining their attention spans by spending all their time on TikTok until they can no longer read novels; I think it'd be a real mistake to let that happen to my professional skill set. | |
| ▲ | simonw 12 hours ago | parent | prev | next [-] | | Yes, absolutely, I can be mentally wiped out by lunch. | |
| ▲ | SJMG 10 hours ago | parent | prev | next [-] | | I think it's the serial waiting game and inevitable context switching while you wait. Long iteration cycles are taxing | |
| ▲ | bugglebeetle 11 hours ago | parent | prev | next [-] | | Nah, I don’t miss at all typing all the tests, CLIs, and APIs I’ve created hundreds of times before. I dunno if I it’s because I do ML stuff, but it’s almost all “think a lot about something, do some math, and and then type thousands of lines of the same stuff around the interesting work.” | |
| ▲ | mupuff1234 11 hours ago | parent | prev | next [-] | | For me it's the opposite, I'm wasting less energy over debugging silly bugs and fighting/figuring out some annoying config. But it does feel less fulfilling I suppose. | |
| ▲ | teaearlgraycold 12 hours ago | parent | prev [-] | | I like to alternate focusing on AI wrangling and writing code the old fashioned way. |
| |
| ▲ | AlotOfReading 12 hours ago | parent | prev | next [-] | | It's difficult to steer complex systems correctly, because no one has a complete picture of the end goal at the outset. That's why waterfall fails. Writing code agentically means you have to go out of your way to think deeply about what you're building, because it won't be forced on you by the act of writing code. If your requirements are complex, they might actually be a hindrance because you're going have to learn those lessons from failed iterations instead of avoiding them preemptively. | |
| ▲ | llmslave2 12 hours ago | parent | prev | next [-] | | Does using an LLM to craft Hackernews comments count as "steering systems"? | | |
| ▲ | coip 12 hours ago | parent [-] | | You're totally right! It's not steering systems -- it's cooking, apparently |
| |
| ▲ | codeformoney 11 hours ago | parent | prev | next [-] | | The stereotype that writing code is for junior developers needs to die. Some devs are hired with lofty titles specifically for their programming aptitude and esoteric systems knowledge, not to play implementation telephone with inexperienced devs. | |
| ▲ | remich 10 hours ago | parent [-] | | I don't think that anyone actually believes that writing code is only for junior developers. That seems to be a significant exaggeration at the very least. However, it is definitely true that most organizations hiring people into technical lead, staff engineer, or principal engineer roles are hiring those people not only for their individual expertise, or their ability to apply that expertise themselves, but also for their ability to use that expertise as a force multiplier to make other, less experienced people better at the craft. | |
| ▲ | codeformonkey 8 hours ago | parent | next [-] | | In my world there are Hard Problems that need to be solved for bu$ine$$ rea$on$, no being a "force multiplier" required (whatever that really means). | |
| ▲ | inkyoto 6 hours ago | parent | prev [-] | | > I don't think that anyone actually believes that writing code is only for junior developers. That is, unquestionably, how it ought to be. However, the mainstream – regrettably – has devolved into a well-worn and intellectually stagnant trajectory, wherein senior developers are not merely encouraged but expected to abandon coding altogether, ascending instead into roles such as engineering managers (no offence – good engineering managers are important; it is the quality that has been diluted across the board), platform overseers (a new term for stage gate keepers), or so-called solution architects (the ones who are imbued with compliance and governance and do not venture out past that). In this model, neither role is expected – and in some lamentable cases, is explicitly forbidden[0] – to engage directly with code. The result is a sterile detachment from the very systems they are charged with overseeing. Worse still, the industry actively incentivises ill-considered career leaps – for instance, elevating a developer with limited engineering depth into the position of a solution designer or architect. The outcome is as predictable as it is corrosive: individuals who can neither design nor architect. The number of organisations in which expert-level coding proficiency remains the norm at senior or very senior levels has dwindled substantially over the past couple of decades – job ads explicitly call out management experience and knowledge of vacuous or marginally useful architectural frameworks (TOGAF and the like). There do remain rare islands in an ever-expanding ocean of managerial abstraction where architects who write code, not incessantly but when need be, are still recognised as invaluable. Yet their presence is scarce. The lamentable state of affairs has led to a piquant situation on the job market.
In recent years, headhunters have started complaining about being unable to find an actually highly proficient, experienced, and, most importantly, technical architect. One's loss is another one's gain, or at least an opportunity, of course. [0] Speaking from firsthand experience of observing a solution architect to have quit their job to run a bakery (yes) due to the head of architecture they were reporting to explicitly demanding the architect quit coding. The architect did quit, albeit in a different way. |
|
| |
| ▲ | Madmallard 8 hours ago | parent | prev [-] | | "it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants." Strongly suspect this is simply less efficient than doing it yourself if you have enough expertise. |
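The "fence it in" approach the quoted line describes can be made concrete: keep a small, human-owned invariant suite that any agent-written code must pass before it is accepted. A minimal Python sketch, where `dedupe_keep_order` stands in for a hypothetical agent-generated helper and the assertions are the human-owned contract:

```python
def dedupe_keep_order(items):
    # Imagine this body came from a coding agent; the suite below is what
    # the human actually reviews and owns.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def check_invariants(fn):
    """Invariants any acceptable implementation must satisfy."""
    samples = [[], [1, 1, 2], ["b", "a", "b"], list(range(5)) * 3]
    for s in samples:
        out = fn(s)
        assert len(out) == len(set(out)), "no duplicates allowed"
        assert set(out) == set(s), "no elements invented or dropped"
        # order of first occurrence must be preserved
        assert out == [x for i, x in enumerate(s) if x not in s[:i]]

check_invariants(dedupe_keep_order)
print("invariants hold")
```

Whether writing and maintaining such fences costs less than writing the code yourself is exactly the trade-off the comment questions.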
|
|
| ▲ | lesuorac 12 hours ago | parent | prev | next [-] |
| > Most Recent Task for Survey (number of respondents): Building apps, 53; Testing, 1. I think this sums up everybody's complaints about AI-generated code. Don't ask me to be the one to review work you didn't even check. |
| |
|
| ▲ | AYBABTME 9 hours ago | parent | prev | next [-] |
| It feels like we're doing another lift to a higher level of abstraction. Just as "automatic programming" and high-level programming languages freed us from assembly - letting authors express higher-level abstractions without knowing or caring about the assembly (and that switch took decades) - we are once again being pulled up another layer. We're in the midst of another abstraction level becoming the working layer, and that's not a small layer jump but a jump to a completely different plane. And I think once again we'll benefit from getting tools that help us specify the high-level concepts we intend, and ways to enforce that the generated code is correct - not necessarily fast or efficient, but at least correct - same as compilers do. And this lift is happening on a much more accelerated timeline. The problem of ensuring correctness of the generated code across all the layers we're now skipping is going to be the crux of how we manage to leverage LLM/agentic coding. Maybe Cursor is Turbo Pascal. |
|
| ▲ | danavar 3 hours ago | parent | prev | next [-] |
| So much of my professional SWE jobs isn't even programming - I feel like this is a detail missed by so many. Generally people just stereotype SWE as a programmer, but being an engineer (in any discipline) is so much more than that. You solve problems. AI will speed up the programming work-streams, but there is so much more to our jobs than that. |
|
| ▲ | websiteapi 12 hours ago | parent | prev | next [-] |
| we've never seen a profession drive itself so aggressively toward irrelevance. software engineering will always exist, but the pace at which pressure against the profession is rising is amazing. 2026 will be a very happy new year indeed for those paying the salaries. :) |
| |
| ▲ | simonw 12 hours ago | parent | next [-] | | We've been giving our work away to each other for free as open source to help improve each other's productivity for 30+ years now and that's only made our profession more valuable. | | |
| ▲ | websiteapi 11 hours ago | parent | next [-] | | I see little proof that it was open source, rather than the digitization of everything and the resulting demand for people to assist with it, that produced higher wages. | | |
| ▲ | simonw 11 hours ago | parent [-] | | I'm not sure how I can prove it, but ~25 years ago building software without open source sucked. You had to build everything from scratch! It took months to get even the most basic things up and running. I think open source is the single most important productivity boost to our industry that's ever existed. Automated testing is a close second. Google, Facebook, many others would not have existed without open source to build on. And those giants and others like them that were enabled by open source employed a TON of people, at competitive rates that greatly increased our salaries. | | |
| ▲ | christophilus 10 hours ago | parent | next [-] | | 25 years ago, I was slinging apps together super fast using VB6. It was awesome. It was a level of productivity few modern stacks can approach. | | |
| ▲ | ipdashc 10 hours ago | parent | next [-] | | I'm too young to have used VB in the workforce, but I did use it in school, and honestly off that alone I'm inclined to agree. I've seen VB namedropped frequently, but I feel like I've yet to see a proper discussion of why it seems like nothing can match its productivity and ease of use for simple desktop apps. Like, what even is the modern approach for a simple GUI program? Is Electron really the best we can do? MS Access is another retro classic of sorts; despite its many flaws, nothing seems to have risen to fill its niche other than SaaS webapps like Airtable. | |
| ▲ | simonw 8 hours ago | parent | next [-] | | You can add Macromedia Flash to that list - nothing has really replaced it, and as a result the world no longer has an approachable tool for building interactive animations. | |
| ▲ | whateverboat 5 hours ago | parent | prev [-] | | https://www.youtube.com/watch?v=hnaGZHe8wws This is a nice video on why Electron is the best you might be able to do. | | |
| ▲ | ipdashc 2 hours ago | parent [-] | | Thanks for the link - this is a cool video. Though it seems like it's mostly focusing on the performance/"bloat" side of things. I do agree that's an annoying aspect of Electron, and I do think his justifications for it are totally fair, but I was more so thinking about ease of use, especially for nontechnical people / beginners. My memory of it is very fuzzy, but I recall VB being literally drag-and-drop, and yet still being able to make... well, acceptable UIs. I was able to figure it out just fine in middle school. In comparison, here's Electron's getting started page: https://www.electronjs.org/docs/latest/ The "quick start" is two different languages across three different files. The amount of technologies and buzzwords flying around is crazy, HTML, JS, CSS, Electron, Node, DOM, Chromium, random `charset` and `http-equiv` boilerplate... I have to imagine it'd be rather demoralizing as a beginner. I think there's a large group of "nontechnical" users out there (usually derided by us tech bros as "Excel programmers" or such) that can perfectly understand the actual logic of programming, but are put off by the amount of buzzwords and moving parts involved, and I don't blame them at all. (And sure, don't want to go in too hard on the nostalgia. 2000s software was full of buzzwords and insane syntax too, we've improved a lot. But it had some upsides.) It just feels like we lost the plot at some point when we're all using GUI-based computers, but there's no simple, singular, default path to making a desktop GUI app anymore on... any, I think, of the popular desktop OSes? |
|
| |
| ▲ | cheema33 9 hours ago | parent | prev | next [-] | | > 25 years ago, I was slinging apps together super fast using VB6. It was awesome. It was a level of productivity few modern stacks can approach. If that were true, wouldn't we all be using VB today? | |
| ▲ | cuu508 an hour ago | parent | next [-] | | Excel (and spreadsheets in general) is not quite the same as VB but is similar in that it solves practical problems and normal people can work with it. | |
| ▲ | majormajor 8 hours ago | parent | prev [-] | | Ever try to maintain a bunch of specialized one-off thrown-together things like that? I inherited a bunch of MS Access apps once ... everything old is new again |
| |
| ▲ | zqna 8 hours ago | parent | prev | next [-] | | Agentic coding is just another rhyme of the 25-year-old frenzy of "let's outsource everything to India." The new generation thinks this time it's really different. Let's check again in 25 years | |
| ▲ | xpe 8 hours ago | parent | prev [-] | | How are you measuring productivity? What one can make with VB6 (final release in 1998) is very far from what one can make with modern stacks. (My efficiency at building LEGO structures is unbelievable! I put the real civil engineers to shame.) Perhaps you mean that you can go from idea to working (in the world and expectations of 1998) very quickly. If so, that probably felt awesome. But we live in 2025. Would you reach for VB6 now? How much credit does VB6 deserve? Also think about how 1998 was a simpler time, with lower expectations in many ways. Will I grant advantages to certain aspects of VB6? Sure. Could some lessons be applicable today? Probably. But just like historians say, don't make the mistake of ignoring context when you compare things from different eras. |
| |
| ▲ | throw1235435 11 hours ago | parent | prev | next [-] | | Indeed it did; I remember those times. All else being equal, I still think SWE salaries on average would have been higher if we had kept it like that, given basic economics: there would have been far fewer people capable of doing it, but the high-ROI automation opportunities would still have been there. The fact that "it sucked" usually creates more scarcity on the supply side, which, all else being equal, means higher wages and, in our capitalist society, status. Older professions, as the parent comment suggests, already know this and don't see SWEs as very "street smart" for disrupting themselves. I've seen articles recently like "at least we aren't in coding" from law, accounting, etc. as anecdotes of this. With AI, at least locally, I'm seeing the opposite now: less hiring, less wage pressure, and in social circles a lot less status when I mention I'm a SWE (almost sympathy for my lot vs respect only 5 years ago). While I don't care for the status aspect, although I do care about my ability to earn money, some do. At least locally, inflation adjusted, SWE wages in my city bought more and were higher in general compared to others in the 90s-2000s than onwards (ex big tech). Partly because that difficulty and low-level knowledge meant only very skilled people could participate. | |
| ▲ | ipdashc 10 hours ago | parent | next [-] | | > ex big tech I mean, this seems like a pretty big thing to leave out, no? That's where all the crazy high salaries were! Also, there are still legacy places that more or less build software like it's 1999. I get the impression that embedded, automotive, and such still rely a lot on proprietary tools, finicky manual processes, low level languages (obviously), etc. But those are notorious for being annoying and not very well paid. | | |
| ▲ | throw1235435 10 hours ago | parent [-] | | I'm talking about what I perceive to be the median salary/conditions, with big tech being only a part of that. My point is more that I remember back in that period good salaries could be had outside big tech too, even in the boring standard companies that you state. I remember banks, insurance, etc. paying very well for an SWE/tech worker compared to today, for example - the good opportunities seemed more distributed. For example, contract rates for some of the developers we hire haven't really changed in 10 years. Now at best they are on par with other professional white-collar workers, and the competition seems fiercer (e.g. 5 interviews for a similar salary, with leetcode games rather than experience-based interviews). Making software easier and more abstract has allowed less technical people into the profession, allowed easier outsourcing, meant more competition/interview prep to filter out people (even if the skills are not used in the job at all), more material for AI to train on, etc. To the parent comment's point, I don't think it has boosted salaries and/or conditions on average for the SWE - in the long run (10 years +) it could be argued that economically the opposite has occurred. |
| |
| ▲ | luckylion 10 hours ago | parent | prev [-] | | Monopolizing the work doesn't work unless you have the power to suppress anyone else joining the competition, i.e. "certified developers only". Otherwise people would have realized they can charge 3x as much by being 5x as productive with better tools while you're writing your code in notepad for maximum ROI, and you would have either adjusted or gone out of business. Increased productivity isn't a choice, it's a result of competition. And that's a good thing overall, even if it sucks for some developers who now have to actually work for the first time in decades. But it's good for society at large, because more things can be done. | | |
| ▲ | throw1235435 10 hours ago | parent [-] | | Sure - I agree with that, and I agree it's good for society, but as you state probably not as good for the SWE who has to work harder for the same, which was my point, and I think you agree. Other professions have done what you have stated (i.e. certification) and seen higher wages than otherwise, which also proves my point. They see this as the "street smart" thing to do, and generally society respects them for it, putting their profession on a higher pedestal as a result. People generally respect those who take care of themselves first, I find, as well. Personally I think there should be a balance between the two (i.e. a fair go for all parties; a fair day's work with some job security over a standard career lifetime, but not extortionary). Also, your notion of "better tools" may not have happened, or happened more slowly, without open source, AI, etc., which would most probably have meant higher salaries for longer. That's where I disagree with the parent poster's claim of higher salaries - AI seems to be a great recent example of "better tools" disrupting the premium SWEs enjoy rather than improving their salaries. Whether that's fair or not is a different debate. I was just doubting the notion of the parent comment that "open source software" and "automated testing" create higher salaries. Economically, efficiency usually (with some exceptional cases) creates lower salaries for the people who are made more efficient, all else being equal - and the value shifts from them to either consumers or employers. |
|
| |
| ▲ | websiteapi 11 hours ago | parent | prev [-] | | even if that's true it's clear enough AI will reduce the demand for swe | | |
| ▲ | simonw 11 hours ago | parent [-] | | I don't think that's certain. I'm hoping for a Jevons paradox situation where AI drives down the cost of producing software to the point that companies that previously weren't in the market for custom software start hiring software engineers. I think we could see demand go up. |
|
|
| |
| ▲ | aussieguy1234 9 hours ago | parent | prev [-] | | This makes sense. Imagine PHP or NodeJS without a framework, or front end development without React. Your projects would take much longer to build. The time saved with the open source frameworks and libraries is more than what an AI agent can save you. |
| |
| ▲ | throw-12-16 2 hours ago | parent | prev | next [-] | | Software Engineers will still exist. Software Devs not so much. There is a huge difference between the two and they are not interchangeable. | |
| ▲ | cheema33 9 hours ago | parent | prev | next [-] | | > we've never seen a profession drive themselves so aggressively to irrelevance. Should we be trying to put the genie back in the bottle? If not, what exactly are you suggesting? Even if we all agreed to stop using AI tools today, what about the rest of world? Will everybody agree to stop using it? Do you think that is even a remote possibility? | | |
| ▲ | dinkumthinkum 9 hours ago | parent [-] | | Does the rest of the world want to make money in a way not involving digging ditches? I feel like people from developing countries that spend 18 hours a day studying, giving their entire childhood to some standardized test, may not want to be rewarded with no job prospects. Maybe that's a crazy position. |
| |
| ▲ | mkoubaa 9 hours ago | parent | prev | next [-] | | Don't care have too much to do must automate away my today responsibilities so I can do more tomorrow trvst the plqn | |
| ▲ | zwnow 12 hours ago | parent | prev [-] | | Also it really baffles me how many are actually in on the hype train. It's a lot more than the crypto bros back in the day. Good thing AI still can't reason and innovate stuff. Also leaking credentials is a felony in my country, so I also won't ever attach it to my codebases. | |
| ▲ | aspenmartin 12 hours ago | parent | next [-] | | I think the issue is folks talk past each other. People who find coding agents useful or enjoyable are labeled "on the hype train," and folks for whom coding agents don't work, or don't fit their workflow, are considered luddites. There are an incredible number of contradicting claims and predictions out there as well, and I believe what we see is folks projecting their reaction to some amalgamation of them onto others. I see a lot of "they" language, and a lot of viral articles about business leadership "shoving AI down our throats," and it becomes a divisive issue like the American political scene, with really no one having a real conversation | |
| ▲ | llmslave2 11 hours ago | parent | next [-] | | I think the reason for the varying claims and predictions is because developers have wildly different standards for what constitutes working code. For the developers with a lower threshold, AI is like crack to them because gen ai's output is similar to what they would produce, and it really is a 10x speedup. For others, especially those who have to fix and maintain that code, it's more like a 10x slowdown. Hence why you have in the same thread, some developer who claims that Claude writes 99% of their code and another developer who finds it totally useless. And of course others who are somewhere in the middle. | | |
| ▲ | throw1235435 11 hours ago | parent | next [-] | | There's also the effect of different models. Until the most recent models, especially for concise algorithms, I felt it was still easier to sometimes do it myself (i.e. a good algo can be concise/more concise than a lossy prompt) and leave the "expansion/repetitive" boilerplate code to the LLM. At least for me the latest models do feel like a "step change" in that the problems can be bigger and/or require less supervision on each problem depending on the tradeoff you want. | |
| ▲ | remich 10 hours ago | parent | prev [-] | | Have you considered that it's a bit dismissive to assume that developers who find use out of AI tools necessarily approve of worse code than you do, or have lower standards? It's fine to be a skeptic. Or to have tried out these tools and found that they do not work well for your particular use case at this moment in time. But you shouldn't assume that people who do get value out of them are not as good at the job as you are, or are dumber than you are, or slower than you are. That's just not a good practice and is also rude. | | |
| ▲ | llmslave2 9 hours ago | parent [-] | | I never said anything about being worse, dumber, and definitely not slower. And keep in mind worse is subjective - if something doesn't require edge case handling or correctness, bugs can be tolerated etc, then something with those properties isn't worse is it? I'm just saying that since there is such a wide range of experiences with the same tools, it's probably likely that developers vary on their evaluations of the output. | | |
| ▲ | remich 8 hours ago | parent [-] | | Okay, I certainly agree with you that different use cases can dictate different outcomes when using AI tooling. I would just encourage everyone who thinks similar to you to be cautious about assuming that someone who experiences a different result with these tools is less skilled or dealing with a less difficult use case - like one that has no edge cases or has greater tolerance for bugs. It's possible that this is the case, but it is just as possible that they have found a way to work with these tools that produces excellent output. | | |
| ▲ | llmslave2 8 hours ago | parent [-] | | Yeah I agree, it doesn't really have to do with skill or different use cases, it's just what your threshold is for "working" or "good". |
|
|
|
| |
| ▲ | mhitza an hour ago | parent | prev | next [-] | | Hard to have a conversation when often the critics of LLM output receive replies like "What, you used last week's model?! No, no, no, this one is a generational leap" Too many people are invested into AI's success to have a balanced conversation. Things will return to normal after a market shakedown of a few larger AI companies. | |
| ▲ | zwnow 12 hours ago | parent | prev [-] | | It's all a hype train though. People still believe in the AI-gonna-bring-utopia bullshit while the current infra is being built on debt. The only reason it still exists is that all these AI companies believe in some kind of revenue outside of subscriptions. So it's all about owning the infrastructure and enshittifying (ads) once enough products are based on AI. It's the same chokehold Amazon has on its vendors. |
| |
| ▲ | fragmede 12 hours ago | parent | prev [-] | | your credentials shouldn't be in your codebase to begin with! | | |
| ▲ | zwnow 12 hours ago | parent [-] | | .env files are a thing in tons of codebases | | |
| ▲ | iwontberude 11 hours ago | parent | next [-] | | but that's at runtime; secrets are going to be deployed in a secure manner after the code is released | |
| ▲ | zwnow 11 hours ago | parent [-] | | .env files are used to develop as well; for some things like PayPal you don't have to change the credentials, you just enable sandbox mode. If I had some LLM attached to my codebase, it would be able to read those credentials from the .env file. This has nothing to do with deployment. I never talked about deployment. | |
| ▲ | Carrok 11 hours ago | parent [-] | | If you have your PayPal creds in your repository, you are doing it wrong. | | |
|
| |
| ▲ | mkozlows 11 hours ago | parent | prev [-] | | If your secrets are in your repo, you've probably already leaked them. |
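The pattern the thread converges on can be sketched in a few lines: keep credentials in the process environment (or a gitignored .env file that never enters the repo or the agent's context) and fail loudly when one is missing. This is a minimal sketch; the helper name and the PAYPAL_CLIENT_ID variable are illustrative, not from the discussion:

```python
import os

def get_secret(name: str) -> str:
    # Read a credential from the environment instead of from a file
    # that lives inside the repository.
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required secret: {name}")
    return value

if __name__ == "__main__":
    # Demo only: real values would come from the shell, a secrets
    # manager, or a .env loader - never from committed code.
    os.environ.setdefault("PAYPAL_CLIENT_ID", "sandbox-demo")
    print(get_secret("PAYPAL_CLIENT_ID"))
```

Alongside this, adding `.env` to `.gitignore` keeps the file out of the repo, and many coding agents support their own ignore configuration to keep it out of the model's context; check your tool's docs for the specifics.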
|
|
|
|
|
| ▲ | geldedus 10 hours ago | parent | prev | next [-] |
| "AI-assisted programming" being mistaken for "vibe coding" is getting old and annoying |
|
| ▲ | amkharg26 5 hours ago | parent | prev | next [-] |
| The title is provocative but there's truth to it. The distinction between "vibing" with AI tools and actually controlling the output is crucial for production code. I've seen this with code generation tools - developers who treat AI suggestions as magic often struggle when the output doesn't work or introduces subtle bugs. The professionals who succeed are those who understand what the AI is doing, validate the output rigorously, and maintain clear mental models of their system. This becomes especially important for code quality and technical debt. If you're just accepting AI-generated code without understanding architectural implications, you're building a maintenance nightmare. Control means being able to reason about tradeoffs, not just getting something that "works" in the moment. |
|
| ▲ | ramoz 8 hours ago | parent | prev | next [-] |
| > Takeaway 3c: Experienced developers disagree about using agents for software planning and design. Some avoided agents out of concern over the importance of design, while others embraced back-and-forth design with an AI. I'm in the back-and-forth camp. I expect a lot of interesting UX to develop here. I built https://github.com/backnotprop/plannotator over the weekend to give me a better way to review & collaborate around plans - all while natively integrated into the coding agent harness. |
|
| ▲ | senshan 10 hours ago | parent | prev | next [-] |
| Excellent survey, but one has to be careful when participating in such surveys: "I'm on disability, but agents let me code again and be more productive than ever (in a 25+ year career). - S22" Once the Social Security Administration learns this, there goes the disability benefit... |
| |
| ▲ | LoganDark 10 hours ago | parent [-] | | I think you eventually lose disability benefits anyway once you start making money. |
|
|
| ▲ | andy99 12 hours ago | parent | prev | next [-] |
| Is the title an ironic play on AI’s trademark writing style, is it AI generated, or is the style just rubbing off on people? |
| |
| ▲ | mattnewton 12 hours ago | parent [-] | | I think it’s a popular style before gen ai and the training process of LLMs picked up on that. | | |
| ▲ | andy99 12 hours ago | parent [-] | | That’s not how LLMs work, it’s part of the reinforcement learning or SFT dataset, data labelers would have written or generated tons of examples using this and other patterns (all the emoji READMEs for example) that the models emulate. The early ones had very formulaic essay style outputs that always ended with “in conclusion”, lots of the same kind of bullet lists, and a love of adjectives and delving, all of which were intentionally trained in. It’s more subtle now but it’s still there. | | |
| ▲ | mattnewton 11 hours ago | parent [-] | | Maybe I was being imprecise, but I’m not sure what you mean by “not how LLMs work” - discovering patterns of how humans write is exactly the signal they are trained against. Either explicitly curated like SFT or coaxed out during RLHF, no? It could even have been picked up in pretraining and then rewarded during rlhf when the output domain was being refined; I haven’t used enough LLMs before post training to know what step it usually becomes noticeable. |
|
|
|
|
| ▲ | throw-12-16 an hour ago | parent | prev | next [-] |
| Getting big "I'll keep making saddles in the era of automobiles" vibes from these comments. |
|
| ▲ | banbangtuth 12 hours ago | parent | prev | next [-] |
| You know what. After seeing all these articles about AI/LLMs for these past 4 years, about how they are going to replace me as a software developer, and about how I am not productive enough without using 5 agents and being a project manager. I. Don't. Care. I don't even care about those debates outside. Debates about whether LLMs work and will replace programmers? Say they do; ok, so what? I simply have too much fun programming. I am just a mere fullstack business-line programmer, a generic, replaceable dude; you can find me a dime a dozen. I do use LLMs as a Stack Overflow/docs replacement, but I always write all my code by hand. If you want to replace me, replace me. I'll go to companies that need me. If there are no companies that need my skill, fine, then I'll just do this as a hobby, and probably flip burgers outside to make a living. I don't care about your LLM, I don't care about your agent, and I probably don't even care about the job prospects for that matter if I am forced to use tools I don't like and workflows I don't like. You can go ahead and find others who are willing to do it for you. As for me, I simply have too much fun programming. Now if you'll excuse me, I need to go have fun. |
| |
| ▲ | llmslave2 12 hours ago | parent | next [-] | | I simply will not spend my life begging and coaxing a machine to output working code. If that is what becomes of this profession, I will just do something else :) | | |
| ▲ | ryanobjc 11 hours ago | parent | next [-] | | If I wanted to do that, I'd just move into engineering management and work with something less temperamental and predictable - humans. I'd at least be more likely to get a boost in impact and ability to affect decision making, maybe. | | |
| ▲ | lifetimerubyist 11 hours ago | parent [-] | | Until you realize you're just begging and coaxing a human to better beg and coax a machine to output working code - when you could just beg and coax the machine yourself. | | |
| |
| ▲ | aspenmartin 12 hours ago | parent | prev [-] | | That would definitely be what the profession becomes if we stopped developing things today. Think about the idea of coding agents 2 years ago: I personally found them very unrealistic, and I am now coding exclusively with them, despite them being either neutral or a net negative to my development time, simply because I see the writing on the wall that in 6 mos to a year they will probably be a huge net positive, and in 2-3 years the dismissive attitude towards adoption will start to look kind of silly (no offense). To me we are _just_ at the inflection point where using and not using coding agents are both totally sensible decisions. |
| |
| ▲ | lifetimerubyist 11 hours ago | parent | prev | next [-] | | Hear hear. I didn't spend half my life getting an education, competing in the corporate crab bucket, retraining and upskilling just to turn into a robot babysitter. | |
| ▲ | yacthing 12 hours ago | parent | prev | next [-] | | Easy to say if you either: (1) already have enough money to survive without working, or (2) don't realize how hard of a life it would be to "flip burgers" to make a living in 2026. We live very good lives as software developers. Don't be a fool and think you could just "flip burgers" and be fine. | | |
| ▲ | banbangtuth 11 hours ago | parent [-] | | Ah, I actually did flip burgers. So I know. I also did dry cleaning, cleaning service, deli, delivery guy, etc. Yup I now have enough money to survive without working. But I also am very low maintenance, thanks to my early life being raised in harsh conditions. I am not scared to go back flipping burgers again. | | |
| ▲ | Madmallard 8 hours ago | parent [-] | | "Yup I now have enough money to survive without working"
Your opinion is borderline irrelevant then. | | |
| ▲ | banbangtuth 8 hours ago | parent [-] | | Indeed, after all I am just replaceable dime a dozen software engineer like I said above. | | |
| ▲ | Madmallard 8 hours ago | parent [-] | | that part doesn't matter it's the part where you don't have to work that matters |
|
|
|
| |
| ▲ | dinkumthinkum 8 hours ago | parent | prev | next [-] | | I hear you, but I feel like you (and really others like you, en masse) should not be so passive about your replacement. For most programmers, simply flipping burgers for money to enjoy programming a few hours a week is not going to work. Making a living is a thing. If you are reduced to having to flip burgers, that means the economy will have collapsed and there won't be any magic Elon UBI money to save us. | | | |
| ▲ | agentifysh 12 hours ago | parent | prev [-] | | having fun isn't tied to employment unless you are self-employed even then what's fun should not be the driving force | | |
| ▲ | lifetimerubyist 11 hours ago | parent | next [-] | | "get a job doing something you enjoy and you'll never work a day in your life" or something like that | |
| ▲ | llmslave2 12 hours ago | parent | prev | next [-] | | That sounds miserable to me :( | | |
| ▲ | agentifysh 12 hours ago | parent [-] | | you work on somebody's dime, its no longer your choice | | |
| ▲ | zem 11 hours ago | parent | next [-] | | it's your choice whose dime you work on. they can compete for your work by making it fun for you. | | |
| ▲ | agentifysh 11 hours ago | parent [-] | | sure unemployment is also a choice | | |
| ▲ | zem 10 hours ago | parent [-] | | fun work > tedious work > unemployment not sure why so many people feel like factoring fun into what job you want to take is so unthinkable, or that it's just a false dichotomy between the ideal job and unemployment | | |
| ▲ | agentifysh 9 hours ago | parent [-] | | you are describing an ideal that is not a reality for many, many people; it is not common | |
| ▲ | zem 8 hours ago | parent [-] | | it's a trade-off; you need a job but you typically interview at several places, collect offers, and weigh them according to various criteria. all the pro-fun posters are saying is that "enjoy the job" is a very highly ranked criterion for us. |
|
|
|
| |
| ▲ | llmslave2 12 hours ago | parent | prev [-] | | It's my life, it's my choice. |
|
| |
| ▲ | banbangtuth 12 hours ago | parent | prev | next [-] | | Why? It is a matter of values. Fun can be a driving force just like money and stability is. It is simply a matter of your values (and your sacrifices). Like I said, I am just a generic replaceable dime a dozen programmer dude. | | |
| ▲ | agentifysh 12 hours ago | parent [-] | | you don't get paid to have fun but to produce as a laborer. a job isn't supposed to be fun; it's nice when it is, but it shouldn't be what drives decisions | | |
| ▲ | banbangtuth 11 hours ago | parent [-] | | You mean it shouldn't be the driving force behind your employer's decisions. Yes, I agree 10000%. I meant it can be your (not necessarily your employer's) driving force in life. Of course, you need to suffer for it. That's what tradeoffs are. | | |
| ▲ | agentifysh 11 hours ago | parent [-] | | almost all employers are going to expect you to use AI and produce more with it. you can definitely choose not to participate and give the opportunity to someone who is happy to use AI and still have fun with it. | | |
| ▲ | banbangtuth 8 hours ago | parent | next [-] | | Indeed, please find others to do it, not me. | |
| ▲ | tikhonj 10 hours ago | parent | prev [-] | | most organizations have awful leadership, sure, but that doesn't mean you can't (or shouldn't) work around it | | |
| ▲ | agentifysh 9 hours ago | parent [-] | | have you tried telling your boss you won't use the AI anymore while the rest of the team uses it? how do you imagine such a conversation playing out? i'm curious | | |
| ▲ | tikhonj 6 hours ago | parent [-] | | what I've done is avoid the sort of boss who would mandate AI use. in a past job I did tell a boss that I wasn't going to be doing the whole tickets/estimates/schedule tetris thing, and that actually worked out... because the leaders I worked with understood the value of being flexible and trusting their lead engineers
|
|
|
|
|
| |
| ▲ | throw-12-16 an hour ago | parent | prev [-] | | i think you angered the hustle bros |
|
|
|
| ▲ | senshan 9 hours ago | parent | prev | next [-] |
I often tell people that agentic programming tools are the best thing since cscope. In the last 6 months I have not used cscope even once, after decades of using it nearly daily. [0] https://en.wikipedia.org/wiki/Cscope |
| |
|
| ▲ | game_the0ry 12 hours ago | parent | prev | next [-] |
| > Through field observations (N=13) and qualitative surveys (N=99)... Not a statistically significant sample size. |
| |
| ▲ | bee_rider 12 hours ago | parent | next [-] | | 97 samples is enough to get a 95% confidence level if you accept a 10% margin of error. 99 is not so bad, at least. https://www.surveymonkey.com/mp/sample-size-calculator/ | |
| ▲ | flurie 10 hours ago | parent | prev | next [-] | | This is a qualitative methods paper, so statistical significance is not relevant. The rough qualitative equivalent would instead be "data saturation" (responses generally look like ones you've received already) and "thematic saturation" (you've likely found all the themes you will find through this method of data collection). There's an intuitive quality to determining the number of responses needed based on the topic and research questions, but this looks to me like they have achieved sufficient thematic saturation based on the results. | |
| ▲ | HPsquared 12 hours ago | parent | prev | next [-] | | Significance depends on effect size. | |
| ▲ | superjose 12 hours ago | parent | prev [-] | | Same thoughts exactly. |
|
|
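For reference, bee_rider's 97 figure falls out of the standard sample-size formula for estimating a proportion (Cochran's formula with the worst-case p = 0.5). A quick sanity check, assuming that's what the linked calculator uses:

```python
import math

def sample_size(z: float, margin: float, p: float = 0.5) -> int:
    """Cochran's sample-size formula for a proportion (large population).
    p = 0.5 is the worst case, i.e. maximum variance."""
    n = z ** 2 * p * (1 - p) / margin ** 2
    return math.ceil(n)

# 95% confidence (z ≈ 1.96) with a 10% margin of error:
print(sample_size(1.96, 0.10))  # → 97
# Tightening to a 5% margin roughly quadruples the requirement:
print(sample_size(1.96, 0.05))  # → 385
```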
| ▲ | 000ooo000 6 hours ago | parent | prev | next [-] |
| Have to wonder about the motivations of research when the intro leads with such a quote. |
|
| ▲ | zkmon 12 hours ago | parent | prev | next [-] |
| I haven't seen the definition of an agent, in the paper. Do they differentiate agents from generic online chat interfaces? |
| |
| ▲ | senshan 9 hours ago | parent | next [-] | | Page 2:
We define agentic tools or agents as AI tools integrated into an IDE or a terminal that can manipulate the code directly (i.e., excluding web-based chat interfaces) | |
| ▲ | esafak 10 hours ago | parent | prev [-] | | An agent takes actions. Chat bots only return text. | | |
| ▲ | zkmon 2 hours ago | parent [-] | | "takes actions" is automation, and that is hardly new. Code has been taking actions for decades. Interpreting and generating text belongs to chat bots. What's new with agents? |
|
|
|
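The distinction being debated above can be sketched as a loop. Everything here (the model interface, the reply shape, the tool set) is an illustrative assumption, not any vendor's API: what is new is not that code acts, but that the model itself decides which action to take next and sees the result.

```python
# Illustrative sketch only: `model`, the reply dict shape, and `tools`
# are hypothetical stand-ins.
def chat_bot(model, prompt: str) -> str:
    # A chat interface: one model call, text out; a human acts on it.
    return model(prompt)

def agent(model, tools: dict, prompt: str, max_steps: int = 10) -> str:
    # An agent: the model's reply may request a tool (edit a file, run
    # the tests); the tool's result is fed back into the context and the
    # model is called again, until it answers with plain text.
    context = prompt
    for _ in range(max_steps):
        reply = model(context)
        if reply.get("tool") is None:  # no tool requested: final answer
            return reply["text"]
        result = tools[reply["tool"]](reply["args"])
        context += f"\n[tool {reply['tool']} returned: {result}]"
    return "step limit reached"
```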
| ▲ | zwnow 12 hours ago | parent | prev | next [-] |
Idk, I still mostly avoid using it and if I do, I just copy and paste shit into the Claude web version. I won't ever manage agents as that sounds just as complicated as coding shit myself. |
| |
| ▲ | lexandstuff 11 hours ago | parent [-] | | It's not complicated at all. You don't "manage agents". You just type your prompt into a terminal application that can update files, read your docs and run your tests. As with every new tech there's a hell of a lot of noise (plugins, skills, hooks, MCP, LSP - to quote Karpathy) but most of it can just be disregarded. No one is "behind" - it's all very easy to use. |
|
|
| ▲ | softwaredoug 8 hours ago | parent | prev | next [-] |
The new layer of abstraction is tests, mostly end-to-end and integration tests. They describe the important constraints to the agents, essentially long-lived context. In effect this is a declarative programming system for overall system behavior.
|
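A minimal illustration of that idea: an end-to-end test pins down a behavior the agent must preserve across sessions, whatever it rewrites underneath. The `checkout` function and its rule here are invented for the example:

```python
import math

# Hypothetical system under test: whatever an agent refactors, the test
# below declares the invariant "discount is applied before tax".
def checkout(subtotal: float, discount: float, tax_rate: float) -> float:
    return (subtotal - discount) * (1 + tax_rate)

def test_discount_applied_before_tax():
    # $100 order, $10 off, 10% tax: tax is charged on $90, not $100.
    assert math.isclose(checkout(100.0, 10.0, 0.10), 99.0)
```

The test says nothing about how `checkout` is implemented; it only constrains the observable behavior, which is what makes it usable as long-lived context for an agent.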
| ▲ | SunlitCat 3 hours ago | parent | prev | next [-] |
| Funny how the title alone evokes the old “real programmers” trope https://xkcd.com/378/ |
|
| ▲ | 4b11b4 12 hours ago | parent | prev | next [-] |
| I like to think of it as "maintaining fertile soil" |
|
| ▲ | andrewstuart 9 hours ago | parent | prev [-] |
| Don’t let anyone tell you the right way to program a computer. Do it in the way that makes you feel happy, or conforms to organizational standards. |
| |