| ▲ | WhyOhWhyQ 18 hours ago |
| "the last couple weeks" When I ran this experiment it was pretty exhilarating for a while. Eventually it turned into QA testing the work of a bad engineer and became exhausting. Since I had sunken so much time into it I felt pretty bad afterwards that not only did the thing it made not end up being shippable, but I hadn't benefitted as a human being while working on it. I had no new skills to show. It was just a big waste of time. So I think the "second way" is good for demos now. It's good for getting an idea of what something can look like. However, in the future I'll be extremely careful about not letting that go on for more than a day or two. |
|
| ▲ | danabramov 16 hours ago | parent | next [-] |
| I believe the author explicitly suggests strategies to deal with this problem; that's the entire second half of the post. There's a big difference between acting as a human tester in the middle and building out enough guardrails that it can do meaningful autonomous work with verification. |
| |
| ▲ | WhyOhWhyQ 16 hours ago | parent | next [-] | | I'm just extremely skeptical about that because I had many ideas like that and it still ended up being miserable. Maybe with Opus 4.5 things would go better, though. To be fair, I did choose an extremely ambitious project. If I were to try it again I would pick something more standard and a lot smaller. I put like 400 hours into it, by the way. | | |
| ▲ | stantonius 15 hours ago | parent [-] | | This is so relatable it's painful: many, many hours of work, an overly ambitious project, now feeling discouraged (but hopefully not willing to give up). It's some small consolation to me to know others have found themselves in this boat. Maybe we were just 6 months too early to start? Best of luck finishing it up. You can do it. | | |
| ▲ | WhyOhWhyQ 15 hours ago | parent [-] | | Thanks! Yes, I won't give up. The plan now is to focus on getting an income and try again in the future. |
|
| |
| ▲ | irrationalfab 15 hours ago | parent | prev [-] | | +1... like with a large enough engineering team, this is ultimately a guardrails problem, which, in my experience with agentic coding, is very solvable, at least in certain domains. | | |
| ▲ | majormajor 10 hours ago | parent [-] | | Like with large engineering teams, I have little faith people will suddenly get the discipline to do the tedious, annoying, difficult work of building good-enough guardrails now. We don't even build guardrails that keep humans who test as they go from introducing subtle bugs by accident; removing more eyes from that introduces new risks (although LLMs are also better at avoiding certain types of bugs, like copypasta shit). "Test your tests" gets very difficult as a product evolves and increases in complexity. Few contracts (whether at the unit-test level or at the "automation clicking on the element on the page" level) are static enough to avoid needing to rework the tests, which means reworking the testing of the tests, and so on. I think we'll find out just how low the general public's tolerance for bugs and regressions is. |
|
|
|
| ▲ | stantonius 17 hours ago | parent | prev | next [-] |
| This happened to me too in an experimental project where I was testing how far the model could go on its own. Despite making progress, I can't bear to look at the thing now. I don't even know what questions to ask the AI to get back into it; I'm so disconnected from it. It's exhausting to think about getting back into it; I'd rather just start from scratch. The fascinating thing was how easy it was to lose control. I would set up the project with strict rules and md files and tell myself to stay fully engaged, but out of nowhere I slid into compulsive accept mode, or worse, told the model to blatantly ignore the rules I had set out. I knew better, and yet it happened over and over. Ironically, it was as if my context window was so full of "successes" that I forgot my own rules; I reward-hacked myself. Maybe it just takes practice and better tooling and guardrails. And maybe this is the growing pains of a new programmer's mindset. But it left me a little shy to try full delegation any time soon, certainly not without a complete reset on how to approach it. |
| |
| ▲ | parpfish 17 hours ago | parent [-] | | I'll chime in to say that this happened to me as well. My project would start out well, but eventually end up in a state where nothing could be fixed and the agent would burn tokens going in circles to fix little bugs. So I'd tell the agent to come up with a comprehensive refactoring plan that would allow the issues to be recast in more favorable terms. I'd burn a ton of tokens to refactor, little bugs would get fixed, but it'd inevitably end up going in circles on something new. | | |
| ▲ | danabramov 16 hours ago | parent [-] | | Curious if you have thoughts on the second half of the post? That’s exactly what the author is suggesting a strategy for. | | |
| ▲ | majormajor 10 hours ago | parent [-] | | "Test the tests" is a big ask for many complex software projects. Most human-driven coding and testing takes heavy advantage of being white-box testing; for open-ended complex-systems development, turning everything into black-box testing is hard. The LLMs, as noted in the post, are good at trying a lot of shit and inadvertently discovering stuff that passes incomplete tests without fully working. Or, if you're in straight-up yolo mode, fucking up your test because it misunderstood the assignment, my personal favorite. We already know it's very hard to have exhaustive coverage for unexpected input edge cases, for instance. The stuff of a million security bugs. So as the combinatorial surface of "all possible actions that can be taken in the system in all possible orders" increases because you build more stuff into your system, so does the difficulty of relying on LLMs looping over prompts until tests go green. |
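The failure mode described in the comment above, an agent looping until an incomplete test goes green, is easy to reproduce. Here is a toy sketch in Python; every function and value in it is hypothetical, invented purely for illustration:

```python
# Toy sketch of "passes incomplete tests without fully working".
# All names below are hypothetical; nothing here is from the thread.

def parse_port(value: str) -> int:
    """Parse a TCP port from user input (happy path only)."""
    return int(value)  # also accepts "-1" and "70000", neither a valid port

def test_parse_port():
    # The only test the agent is asked to satisfy. It goes green,
    # so a loop-until-green workflow stops here.
    assert parse_port("8080") == 8080

def parse_port_checked(value: str) -> int:
    """The version the incomplete test never forces anyone to write."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_checked():
    # The edge case the first test never exercises.
    try:
        parse_port_checked("70000")
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("out-of-range port was accepted")

if __name__ == "__main__":
    test_parse_port()
    test_parse_port_checked()
    print("both tests pass; only one parser is actually correct")
```

Covering inputs like these for a single function is easy; doing it across "all possible actions in all possible orders" is the combinatorial surface the comment is pointing at.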
|
|
|
|
| ▲ | imiric 16 hours ago | parent | prev | next [-] |
| > I think the "second way" is good for demos now. It's also good for quickly creating legitimate-looking scam and SEO-spam sites. When they stop working, throw them away, and create a dozen more. Maintenance is not a concern. Scammers love this new tech. |
| |
| ▲ | keyle 15 hours ago | parent | next [-] | | Advertising campaigns as well, which, arguably, fit your categories. | |
| ▲ | yen223 15 hours ago | parent | prev [-] | | This argument can be used to shut down anything that makes coding faster or easier. It's not a convincing argument to me. |
|
|
| ▲ | newspaper1 16 hours ago | parent | prev [-] |
| I've had the opposite results. I used to "vibe code" in languages that I knew, so that I could review the code and, I assumed, contribute myself. I got good enough results that I started using AI to build tools in languages I had no prior knowledge of. I don't even look at the code any more. I'm getting incredible results. I've been a developer for 30+ years and never thought this would be possible. I keep making more and more ambitious projects and AI just keeps banging them out exactly how I envision them in my mind. To be fair, I don't think someone with less experience could get these results. I'm leveraging everything I know about writing software, computer science, product development, team management, marketing, written communication, requirements gathering, architecture... I feel like vibe coding is pushing myself and AI to the limits, but the results are incredible. |
| |
| ▲ | WhyOhWhyQ 16 hours ago | parent [-] | | I've got 20 years of experience, but w/e. What have you made? | | |
| ▲ | newspaper1 15 hours ago | parent [-] | | I don't want to dox myself since I'm doing it outside my regular job for the most part, but frameworks, apps (on those frameworks), low-level systems stuff, linux-y things, some P2P, lots of AI tools. One thing I find it excels at is web front-end (which is my least favorite thing to actually code); it's easily as good as any front-end dev I've ever worked with. | |
| ▲ | WhyOhWhyQ 15 hours ago | parent [-] | | I think my fatal error was trying to make something based on "novel science" (I'll be similarly vague). It was an extremely hard project to be fair to the AI. It is my life goal to make that project though. I'm not totally depressed about it because I did validate parts of the project. But it was a let down. | | |
| ▲ | newspaper1 15 hours ago | parent [-] | | Baby steps are key for me. I can build very ambitious things, but I never ask it to do too much at once. Focus a lot on having it get the docs right before it writes any code (it'll use the docs), and make the instructions reflexive (i.e. "update the docs when done"). Make libraries, composable parts... I don't want to be condescending since you may have tried all of that, but I feel like I'm treating it the same as when I architect things for large teams, thinking in layers and little pieces that can be assembled to achieve what I want. I'll add that it does require some banging your head against the wall at times. I normally will only test the code after doing a bunch of this stuff. It often doesn't work as I want at that point and I'll spend a day "begging" it to fix all of the problems. I've always been able to get over those hurdles, and I have it think about why it failed and try to bake the reasoning into the docs/tests... to avoid that in the future. | |
| ▲ | WhyOhWhyQ 15 hours ago | parent [-] | | I did make lots of design documents and sub-demos. I think I could have been cleverer about finding smaller pieces of the project which could be deliverables in themselves and which the later project could depend on as imported libraries. |
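For what the reflexive, docs-first workflow described in this exchange might look like in practice, here is a minimal sketch of such an instructions file. The filename (AGENTS.md), the doc paths, and every rule in it are illustrative assumptions, not something the commenter shared:

```markdown
<!-- AGENTS.md: hypothetical sketch of a reflexive agent-instructions file -->
# Working rules

## Before writing code
- Read docs/DESIGN.md and docs/API.md first. If the request conflicts
  with them, stop and ask rather than coding around the conflict.

## Scope
- One small, composable piece per session; prefer extracting a library
  over growing an existing module.

## When done (the reflexive part)
- Update docs/DESIGN.md and docs/API.md to match what was actually built.
- If something failed, record the root cause in docs/LESSONS.md so it
  feeds the next session's context.
```

The reflexive rules are what make the setup self-correcting: each session ends by rewriting the context the next session starts from.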
|
|
|
|
|
|