| ▲ | jdw64 2 days ago |
| Personally, I prefer vibe coding in the sense of stitching things together at the function-to-method level. Unlike people who take the extreme position that vibe coders are useless, I do think LLMs often write individual functions or methods better than I do. But in a way, that does not fundamentally change the nature of the work. Even before LLMs, many functions and methods were effectively assembled from libraries, Stack Overflow snippets, documentation examples, and copied patterns. The real limitation comes from the nature of transformer-based LLMs and their context windows. Agentic coding has a ceiling. Once the codebase reaches a scale where the agent can no longer hold the relevant structure in context, you need a programmer again. At that point, software engineering becomes necessary: knowing how to split things according to cohesion and coupling, using patterns to constrain degrees of freedom, and designing boundaries that keep the system understandable. In my experience, agentic coding is useful for building skeletons. But if you let the agent write everything by itself, the codebase tends to degrade. The human role is to divide the work into task units that the agent can handle well. Eventually, a person is still needed. If you make an agent do everything, it tends to create god objects, or it strangely glues things together even when the structure could have been separated with a simpler pattern. Thinking about it now, this may be exactly why I was drawn to books like EIB: they teach how to constrain freedom in software design so the system does not collapse under its own flexibility. |
|
| ▲ | wombat-man 2 days ago | parent | next [-] |
| The models are improving. The software that harnesses them is also improving. It wasn't that long ago that the models were quite bad at a lot of the tasks that they are excelling at today. I do agree there's probably a ceiling to what we can get out of these, but I also don't think we have quite hit that point yet. |
| |
| ▲ | jdw64 2 days ago | parent | next [-] | | I agree with what you said. And perhaps my belief that “people like me are still needed” is just a desperate form of self-persuasion. If AI replaces everything, then I become unnecessary. So maybe I am simply trying to convince myself that developers like me are still needed. That said, realistically, I still think there are limits unless the essence of architecture itself changes. I also acknowledge part of your perspective. Those of us who are not in the AI field tend to experience AI progress not as a linear or continuous process, but as a series of discrete events, such as major model releases. Because of that, there is inevitably a gap in perspective. People inside the industry, at least those who are not just promoting hype, often seem to feel that technological progress is exponential. But since we are not part of that industry, we experience it more episodically, as separate events. At the same time, capital has a self-fulfilling quality. If enough capital concentrates in one direction, what looked like linear progress may suddenly accelerate in an almost exponential way. However, even that kind of model can eventually hit a specific limit. I do not know when that limit will arrive, because I am not an AI industry insider. More precisely, I am closer to someone who uses Hugging Face models, builds around them, and serves them, rather than someone working on AI R&D itself. | | |
| ▲ | tharkun__ 2 days ago | parent | next [-] | | > “people like me are still needed” is just a desperate form of self-persuasion.
No, no it's not. I've seen what a "PM armed with an LLM" will do. Trust me, if you're a decent enough Full Stack software engineer who can take an idea and run with it to implement it, you'll have a leg up over the PM with the idea who has no idea how to "do computers". Most of what these PMs can produce nowadays turns boardroom heads, sure. But it's just that: visuals and just enough prototype functionality to fool the people you're demoing to. Seen enough of these in the recent past. Will there be some PMs that can become "software developers" while armed with an LLM? Sure! But that's not the majority. On the other hand, yes, there are going to be "software developers" out of a job because of LLMs, because the devs that were FS and could take an idea from 0-1 with very little overhead even in the past can now do so much faster and go much further without handing off to the intermediates and juniors. They mentor their LLM intern rather than their intermediates and juniors. The perpetual intermediate devs with 20 years of experience are the ones that are gonna have a larger and larger problem, I'd say. The Staff engineer that was able to run circles around others all along? They'll coach their LLM intern up into an intermediate rather than having to "10x" a bunch of perpetual intermediates with 20 years of experience. | | |
| ▲ | rufasterisco 2 days ago | parent | next [-] | | I agree with you overall, yet there’s one flow that works for me.
Instead of speccing out a feature, I let PMs vibe code it.
I then have the exact reference I need to build. Maybe the LLM one-shotted the right thing, maybe it needs fixes, maybe some fundamentals are misunderstood; in any case it's easier for me to know what I need to build, for the PM to be aware of some limitations (the LLM does the job of pushing back and explaining), and overall for us to have to-the-point conversations. It's somewhat orthogonal to what you say, since you focused on dev seniority, so that part stands true. But I think "PMs armed with an LLM" can, when properly used, add a lot of value to the dev process. | | |
| ▲ | nunez a day ago | parent | next [-] | | > I agree with you overall, yet there’s one flow that works for me. Instead of speccing out a feature, I let PMs vibe code it. I then have the exact reference I need to build. Like BDD, but with something more accessible than Cucumber. I'm totally here for that. It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase. From a corporate standpoint, that would be excellent business-logic-as-code, if the code is coming from a PM or a stakeholder on the business side of the house. From an engineering standpoint, it would be an excellent addendum to the codebase's documentation. | | |
| ▲ | tharkun__ 9 hours ago | parent | next [-] | | FWIW, BDD and frameworks like Cucumber don't work at all in my experience. The people who'd need to fill these out don't do it properly (they can't), and then we, devs, are stuck with brittle and un-debuggable stuff that's worse than if we had just used regular code to encode what we understood from them. It's the same reason (most) PMs armed with an LLM still won't get anything usable done. They can't do it properly. They still need devs. But the gaps are shrinking. A few PMs can get stuff done with Cucumber, could wireframe UX with previous tools, and can now do so much more easily and better with an LLM. > It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase
I doubt you'd want this. It's a chat session for a reason. It's gonna be a huge wall of text, especially if you meant to actually include all the internal prompting the LLM did while it was working. You'd also have all my "no dude, stop bullshitting me! I told you to ignore X and use Y and to always double check Z and provide proof". It would only "work" if every single feature you wrote was 100% written by the LLM from a single, largish and well-defined prompt, the LLM works for a few hours, and out comes the feature. And even then you have no reproducibility (even if you turned around and gave it to the exact same model, let alone after retraining, a newer model, a changed system prompt, etc.). | | |
| ▲ | rufasterisco 6 hours ago | parent [-] | | There are ways to work around the wall-of-text issue.
Mostly, git LFS. When it comes to "no dude stop etc etc" … that is valuable information. You can extract it and put down rules for agents so that you stop repeating it each time. The same can be done at PR time, so that you can review not just the code but also how you got there. It's trivial to go from a session to nicely polished HTML with the conversation side by side. If you want to try, username at gmail, I have a private repo with it running.
I value criticism, sorry for the plug ;) Oh, on the different-models side, I don't see the advantage of reproducibility, or rather, I don't think I understand what you mean; can you help me see it? | | |
| ▲ | tharkun__ 23 minutes ago | parent [-] | | I don't understand how "wall of text" is related to git large file support. The wall of text is a problem for me, the human. Sure, there are ways, like "be brief", caveman etc. In a large repo with lots of different people over time, I can't see how it won't just be wall of text again. It's just too much. TL;DR. And because of the DR, the LLM will have buried bullshit in that text, which future sessions might read and "believe". As for "no dude", no, that can't be put down into rules. Not all of it anyway. We have stuff encoded in the repo-wide .md file, I have my personal one etc., and the various agents still don't do what we tell them to in all cases, or a new model comes out and it no longer works. For example, for finding the root cause of a bug, it's very important to have actual proof and references. It's getting there with my instructions in the .md, but it doesn't always work and I do have to "dude" it from time to time. Is that back-and-forth valuable to have in files that are going to be part of the repo? I very highly doubt it. Having the new rules that came out of the back-and-forth in a checked-in AGENTS.md, sure, that is valuable. I've seen enough PR descriptions created by the agent. Fluffy walls of text that look good but are factually wrong. Seen it way too many times. Too many people just look at whether it looks good and then pass it off as truth. I'm tired of it, and making that into "nice HTML" doesn't make it better. It just makes it look even nicer, not more true. Re: reproducibility. My parent poster (and I guess you as well) wanted to have the prompt/conversation as "documentation". I don't see why that would be helpful. The only reason I could see would be for "reproducibility", which you won't get with an LLM. I don't see why else, but do tell me. What I can agree could be valuable are the "why"s, i.e. the stuff that already should have been part of the ticket/requirements document. 
If you want to store that inside the repo as text files, instead of the original tickets or documents, that's fine of course. But I don't see how a "recording of how the code came to be" is valuable. It's like having a recording of all my IDE keystrokes and intermediate code state in pre-LLM days. Not valuable. What's valuable are the requirements and the outcome (i.e. code). Not "the thing in between". Now don't get me wrong. Recordings of how people code/use their IDE can be a valuable teaching tool. Both as good and bad examples. And the same can be true for an agent coding session. |
|
| |
| ▲ | rufasterisco 7 hours ago | parent | prev [-] | | I am actually working on that.
Want to beta test? :) I can invite you to the (for now private) GitHub repo. Any feedback would be helpful! |
| |
| ▲ | fatata123 a day ago | parent | prev [-] | | [dead] |
| |
| ▲ | rogual 2 days ago | parent | prev | next [-] | | What I'd love to see is videos of nontechnical folks using language models to create software. When I use them myself, I just see them crushing it and think, this thing is now doing my job for basically $0, I am no longer economically relevant. But I've spent a lifetime learning to program, so it's possible I only get good results because of the way I think to prompt it. I really can't get the outside view so I can't decide whether AI is going to make me homeless or not. I think we need the videos. | | |
| ▲ | _aavaa_ 21 hours ago | parent [-] | | If you need comfort, just read the story of the week where a "technical" founder gave the LLM full access to their production environment and it wiped everything. |
| |
| ▲ | ricardobayes 2 days ago | parent | prev | next [-] | | Oddly, devops seems to be the "last bastion" of our trade, as they seem to be the only ones pushing back against PM vibe-coded stuff. Usually, while those projects look aesthetically pleasing, they start to fall apart when met with devops requirements for environment variables, cybersecurity, etc. | |
| ▲ | dasil003 2 days ago | parent | prev | next [-] | | I agree with you. So far what I see is that AI amplifies an individual's output in many domains, but the value of that is 100% contingent on their judgment. It changes the economics of many tasks, but fundamentally it can't really help you if you don't actually know what you want—which describes a shocking number of people in the corporate world, where most people are there for a paycheck, and perhaps to pursue some social marker of "success". I'm under no illusions about the goals of AI company execs to justify their valuations (and expenses!) by capturing a huge chunk of global employment value, and the CEOs of many big companies, whose financials are getting squeezed for all sorts of reasons, are all too happy to jump on the efficiency narrative of AI to justify layoffs that would have been necessary anyway. Also, AI will keep getting better, and it will certainly move up the food chain—it's already replaced a lot of what I did, and I assume capabilities will continue improving for a while even after model capabilities plateau, as we improve harnessing, tooling and practice. So yeah, it can replace a lot of what we do, but I'm not running scared, because every step of the way I've seen that software people are the ones who actually get the most out of LLMs. Sure, it can write all the code, so the job changes; but even as our workflows completely change, it's giving us more of an edge (if we're open to it) than it does to anyone non-technical. At this stage it still feels empowering on an individual level. Now I do worry about the consolidation of power and wealth in a tech oligarchy, but that's an issue we need to deal with at a societal and government policy level. Essentially, I can see AI as having radically different outcome potential based on how it's governed. 
In one way it can be very empowering to small teams: it can reduce coordination costs and increase competition by allowing smaller groups of people to build more scalable companies. But it could also lead to unprecedented concentration of wealth and power if a small set of AI companies are allowed to capture all the economic gains. I don't think there are any easy answers, but I do feel hopeful that we can figure something out as a society—it certainly seems to be creating some unified sentiment across political lines that have been so polarized and divisive over the last decade. | |
| ▲ | cushycush 2 days ago | parent | next [-] | | That it amplifies by 1000x is the problem for our jobs. However, I do agree that developers with experience are needed to actually harness these tools. I've been able to do wonders with them, but I can't see a junior dev doing 10% of the work that I can with them. |
| ▲ | TheOtherHobbes 2 days ago | parent | prev [-] | | It's a strategy problem, and the current version of the US is spectacularly bad at strategy. Once upon a time the US had visionaries steering DARPA and making useful bets on the future. Now strategy is defined by stonks-go-up, quarterly returns, democracy bad, and CEO narcissism, and that's a potently catastrophic combination. |
| |
| ▲ | bambax 2 days ago | parent | prev [-] | | I think this is exactly correct. |
| |
| ▲ | wombat-man a day ago | parent | prev | next [-] | | I have a more optimistic take. Those of us who have done it by hand for a while are armed with that experience. Yes, you can just use an LLM to do everything now, but I think it's tough to supervise it on tasks that you've never actually had to do. Maybe that won't be as important as I think, but I think I'd have learned a lot less in school if I had just used an LLM to code everything. Day to day, the resolution of our work is probably different. We're zooming out and spending more time strategizing and managing the AI tooling. This might mean fewer jobs. It might also mean we just get more done. I don't work on AI directly either, but I'm finding a lot of value in learning the new tooling. I think being able to competently leverage these tools is going to be a key skill from now on. |
| ▲ | riffraff 2 days ago | parent | prev [-] | | I'm with you at the "bargaining" phase of AI grief (sure AI is useful but it won't replace me!). I think my reasoning is you still need a tech person to translate from feature to architecture. AI can do both but not everyone knows they need the latter. | | |
| ▲ | cushycush 2 days ago | parent [-] | | Of course, but unfortunately it reduces the number of jobs 100x or more. You don't need 30 software developers at a startup anymore. You just need one. | |
| ▲ | PsylentKnight a day ago | parent | next [-] | | This sounds like the lump of labor fallacy. It seems almost certain to me that AI is going to increase the surface area of what it's possible for programs to do, and therefore massively induce demand for more programs. I think the part that remains to be seen is whether a sufficient percent of that new work will be done by humans such that overall demand for humans doesn't collapse. Personally, I think us humans will be ok for at least a few more years. | |
| ▲ | tavavex a day ago | parent [-] | | > It seems almost certain to me that AI is going to increase the surface area of what it’s possible for programs to do and therefore massively induce demand for more programs Have we seen any of that yet? If anything, the most popular modern projects out there are all AI tooling, basically recursive software to help with using AI. Have you seen any truly novel software that solves new problems? Even before AI, I've been worried that most of the problems that were possible and viable to solve have run dry, leading to tech chasing hype and the next big thing over practical issues that have already been scooped up by someone else. What new problems have been added? |
| |
| ▲ | riffraff a day ago | parent | prev | next [-] | | I think it won't reduce jobs by 100x but yes, some jobs will be lost. I think it's right to put effort towards necessary regulatory and political changes, but there's no point in trying to deny the change. | |
| ▲ | slopinthebag 2 days ago | parent | prev [-] | | That just means you can have 30x more startups! |
|
|
| |
| ▲ | ponector a day ago | parent | prev | next [-] | | If everything is improving, why is the quality of released software going down? |
| ▲ | ares623 2 days ago | parent | prev [-] | | At $800B collective spend, you would hope these things are improving. The question is whether the improvements have been worth $800B and counting. | |
| ▲ | wombat-man a day ago | parent | next [-] | | I think part of the motivation for the big spend by the big players is to choke out Anthropic and OpenAI. They're going to make sure they're the only ones scaling up the huge capacity they expect is needed. To meet demand, Anthropic is just going to need to pay the cloud bill to somebody, which will really hamstring their ability to profit. |
| ▲ | vasco 2 days ago | parent | prev [-] | | Yes, for sure. Even if we stopped today, the amount of almost-free software that can be produced with current models will improve the world by a lot as the knowledge of how to use them propagates to more people. | |
| ▲ | ygrr a day ago | parent | next [-] | | The problem with this argument is that it shouldn't take years for these developments to come about anymore. The world is incredibly interconnected via the web - which also explains ChatGPT's explosive growth. To claim people aren't trying would be comical: where there's an opportunity to generate economic profits, competition will be intense. The best we have external to the model producers is Cursor and openclaw, lmao. The gap between hype and reality is disgustingly large. | |
| ▲ | vasco a day ago | parent [-] | | I don't think you're correct. Just think of things like using any computer system in your business, like a spreadsheet to keep track of inventory. From the moment spreadsheet software became available to the moment most businesses were using it, how many years went by? I knew businesses that should have had computerized processes that didn't in, like, 2010. So if you just apply the knowledge that even basic good things take a long time to truly spread and permeate, then even if the tech stopped advancing today, the current benefits will take years to fully materialize. There are many "little software tools for X" that now any business owner with a few hours can create. I know many people improving their small businesses for free like this, and I've helped a few friends make their lives easier with "small software" assisted by AI. People who would never afford 20 SaaS products for this and that, and would never go through the hassle of hiring someone to do it custom. And they will be able to do this even if the bubble pops and all the labs go bankrupt, by just setting up a little GPU with a local model. I dunno about hype, I just know I have several friends running self-made custom software "in production" for small things, with almost no help, for their classic "offline" businesses. |
| |
| ▲ | ares623 2 days ago | parent | prev [-] | | Can't wait to eat software for dinner |
|
|
|
|
| ▲ | bambax 2 days ago | parent | prev | next [-] |
Yes, but I don't think having LLMs only write functions while doing the architecture yourself qualifies as "vibe coding": rather, that's "AI-assisted engineering" (which is what I do). Vibe coding, to me, means having an LLM, with or without agents, do everything after an initial vague prompt. Which is why "anyone" can vibe code (because anyone can write general hand-waving imprecise instructions). This inevitably results in pointless demos and/or unmaintainable monsters. |
|
| ▲ | 8note 2 days ago | parent | prev | next [-] |
It's not necessarily better, but it's certainly good enough. If you're already used to distributing work to different people, the scale of the code doesn't really matter that much, as long as a programmer can point it at the right places. I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job extending. If you get the base right, though, the agent can make precise changes in large code bases. |
| |
| ▲ | jdw64 2 days ago | parent | next [-] | | Thinking about it, I think what is interesting about the output of agentic coding is this: I mostly agree with the general tendency that it starts to break down as the context grows. But there is also a difference in how people evaluate it. Some people say agents are good at building the skeleton, while others say they are better at extending an existing structure. I think this depends on the setup, and it is ultimately a trade-off. In my case, I usually work on codebases around 60,000 LoC. The programs I deliver are generally between 60,000 and 80,000 lines of code. I think I can fairly call myself a specialist at that scale, since I have personally delivered close to 40 projects of that size. At that scale, I felt that agentic coding was actually very good at building the initial skeleton. I do not know what kind of work you usually do, but if your work involves highly precise, low-level tasks, then I can understand why you might feel differently. In my case, I mostly assemble high-level libraries and frameworks into working systems, so that may be why I experience it this way. | | |
| ▲ | sroussey 2 days ago | parent | next [-] | | The coding agents are good at growing code. Like a child growing up! Also, like a cancer. Similar process, different outcomes. | | |
| ▲ | cdud3 2 days ago | parent [-] | | That's why we started to force our developers to take ownership of and responsibility for what their AI ships to other developers for review. It's stunning how much the amount of code decreases and the quality of the deliveries improves when developers put extra effort into iterating on reducing the complexity AI introduces. In a lot of cases you can vibe code that too, if you understand the output and guide your AI along the path. |
| |
| ▲ | slopinthebag 2 days ago | parent | prev [-] | | I think it's just the context it's working in. 1M lines of HTML are infinitely more conducive for a language model to work in than 10k lines of complex multithreaded low-level code. A lot of coding is just rehashing the same concepts in slightly novel ways; language models work great in this context as codegen machines. The hope is that we can focus our efforts on harder problems, using language models as a tool to make us more productive and more powerful, and, with the advancements open-weight models have made, also less reliant on big tech companies to do so. |
| |
| ▲ | energy123 2 days ago | parent | prev [-] | | I find LLMs are good at skeletons, but only if you are diligent about writing down what you want before you start. Then give that text to GPT 5.5 Pro, and be prepared for a number of iterations. |
|
|
| ▲ | ricardobayes 2 days ago | parent | prev | next [-] |
Yes, that is all true. LLMs are excellent at providing a single function, but decision-makers extrapolated that capability and concluded that LLMs can work on their own with minimal or no supervision. That's not going to be realistic for a very long time. |
|
| ▲ | altern8 2 days ago | parent | prev | next [-] |
How long before they raise the amount of context a model can hold? Or is there a ceiling that we can't get past? |
| |
| ▲ | disgruntledphd2 2 days ago | parent [-] | | Compute scales quadratically with context length, so barring algorithmic improvements, there's definitely a limit. | |
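A back-of-the-envelope sketch of that quadratic scaling (the simplified FLOP count and the token/dimension numbers here are illustrative assumptions, not any real model's accounting):

```python
# Why context length hits a compute wall: in standard self-attention,
# every one of the n tokens attends to all n tokens, so the score
# matrix alone is n x n per head per layer.

def attention_flops(n_tokens: int, d_model: int) -> int:
    """Approximate multiply-add count for one self-attention layer,
    counting only QK^T and the attention-weighted sum of values
    (projections and the MLP are omitted for simplicity)."""
    scores = 2 * n_tokens**2 * d_model    # QK^T: (n x d) @ (d x n)
    weighted = 2 * n_tokens**2 * d_model  # softmax(QK^T) @ V: (n x n) @ (n x d)
    return scores + weighted

# Doubling the context quadruples the attention cost:
print(attention_flops(16_000, 4096) / attention_flops(8_000, 4096))  # 4.0
```

This is why long-context work leans on algorithmic changes (sparse or linear attention, sliding windows, retrieval) rather than just bigger windows.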
|
|
| ▲ | rcpt 2 days ago | parent | prev | next [-] |
We've all got agents at work now, and still the engineers haven't equalized. |
|
| ▲ | block_dagger 2 days ago | parent | prev | next [-] |
| The ceiling will soon be super-human. |
| |
| ▲ | nevertoolate 2 days ago | parent [-] | | What do you base this on? For me it is almost impossible to guess what fits into the context of an LLM. Sometimes trivial tasks fail, sometimes quite complex things get one-shotted. |
|
|
| ▲ | DalekBaldwin 2 days ago | parent | prev | next [-] |
| EIB? |
| |
|
| ▲ | colechristensen 2 days ago | parent | prev | next [-] |
I've found the LLM limitation on codebase size is removed with correct design of the codebase. If you organize your product into a collection of appropriately scoped libraries (each library the right size for the LLM to comprehend the whole thing), then the project size is not limited by the LLM's comprehension. Your task management has to match: the organization of your ticketing system has to parallel the codebase. With this, the LLM can think at different scales at different times. |
| |
| ▲ | slopinthebag 2 days ago | parent [-] | | Yeah, but this is just regular programming. Of course you can break things down into the right atomic units where a codegen machine becomes useful - because you are an expert. People who aren't literally have no clue. Any task, no matter how complex, can be broken down into units where a language model can output useful code. The more complex, the smaller the units. At some point it's faster to write it yourself; that's the limit on the codegen. I still don't see how it's anything other than a tool that experienced and knowledgeable workers can use to save time and energy to focus on the hard parts. |
|
|
| ▲ | slopinthebag 2 days ago | parent | prev [-] |
I agree. Language models are good at codegen; in some sense they are just another codegen tool, except instead of transforming a structured language (like a config file or markdown) into code, they convert natural language into code. Genuinely useful for the repetitive boilerplate grunt work. If that's all you do, then I can see fearing getting replaced. Thankfully, by handling the drudgery, it frees us up to work on more complex and cutting-edge work. Like, it's not surprising that the developers who frequently talk about 90%+ of their work being delegated to LLMs are web developers. That is a field with very little innovative or complex code; it's mostly grunt work translating knowledge of style rules and markup into code, or managing CRUD. I'm really thankful I can have a language model do that drudgery for me. But compare that to, e.g., writing a multithreaded multiplayer networking service in Rust: there they fall woefully short at generating code for me. They can be used in auxiliary aspects, like search or debugging, but the code they produce without substantial steering is not usable. It's often faster for me to write the code myself, because what's needed is not a substantial amount of low-impact code but a small amount of complex, high-impact code which needs to satisfy many invariants. That code is fast to type; the majority of the work is elsewhere. At the end of the day, they work really well to replace typing the boilerplate, which is much appreciated. |
| |
| ▲ | ngruhn 2 days ago | parent [-] | | Try to get an animation just right without human guidance. It's difficult to give the agent feedback on its work. With a browser MCP the agent can only take screenshots, so it sees a single frame of the animation. Agents are also quite slow with browser handling: if the animation starts when a button is clicked, the animation is usually over before the agent has taken the screenshot. All behavior of backend code, by contrast, can at least be described with automated tests. | |
| ▲ | slopinthebag 2 days ago | parent [-] | | Yeah, like, I don't mean to demean front-end work, because there is a lot of stuff that isn't gruntwork or boilerplate, especially in the artistic fields or in UI that is actually really complex. I actually made my initial career off of UI/UX. And a lot of the CRUD backend stuff really is literally just shuffling data in the most boring and replicated way as well. I guess my point is more that we have a lot of code being written that probably should have been automated already in some way, but it was simply more practical to just have people writing it. I don't see much harm in automating it with AI - the people doing the grunt work are largely capable of more, but at the end of the day someone has to dig the ditches. Now that we have a backhoe, they can go do more interesting stuff. However, when I see people who were largely writing meaningless boilerplate now claiming that software development is dead because they've been automated, I think it's important that people are realistic about the different contexts in which AI is either useful or not. There is a wide range of experiences: some people believe AI is useful in completely automating their jobs, others feel it's mostly useless, and of course most people are in the middle somewhere. They're all correct, but the context is crucial. As far as I'm concerned it's just another tool in the toolbox. |
|
|