xsh6942 · 2 hours ago
It really depends on what you mean by "it works". A retrospective of the last 6 months:

I've had great success coding infra (Terraform). It sped up the generation of easily verifiable, tedious-to-write code by at least 10x. The results were audited to death, as the client was highly regulated.

Professional feature dev is hit and miss for sure, although it keeps getting better. We're nowhere near full agentic coding. However, by reinvesting the speed gains from not writing boilerplate into devex, tests and security, I bring to life much better quality software: maintainable and a joy to work with.

I suddenly have the homelab of my dreams; all the ideas previously in the "too long to execute" category now get vibe coded while watching TV or doing other stuff. As an old jaded engineer, I'd found everything code-related was getting a bit boring and repetitive (so many REST APIs). I guess you get the most value out of it when you know exactly what you want.

Most importantly though, and I've heard this from a few other seniors: I've found joy in making cool, fun things with tech again. I like this new way of creating stuff at the speed of thought, and I guess for me that counts as "it works".
raphaelj · 32 minutes ago
Same experience here.

On some tasks like build scripts, infra and CI stuff, I am getting a significant speedup; maybe I am 2x faster on these tasks, measured from start to PR.

I am working on an HPC project[1] that requires more careful architectural thinking. Letting the LLM do the whole task most often fails, or produces low-quality code (even with top models like Opus 4.5). What works well, though, is "assisted" coding: I usually write the interface code (e.g. headers in C++) with some help from the agent, then let the LLM do the actual implementation of those functions/methods. Then I do the final adjustments. Writing a good AGENTS.md helps a lot. I might be 30% faster on these tasks.

It seems to match what I see from the PRs I am reviewing: we are getting them slightly more often than before.
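A minimal sketch of that interface-first split, in Go for brevity (raphaelj writes C++ headers); every name here is invented for illustration. The human pins down the contract, then hands the bodies to the agent and reviews the result:

    // Hypothetical sketch of "assisted" coding: the human writes the
    // interface, the agent fills in the implementation for review.
    package queue

    import "sort"

    // TaskQueue is the human-authored contract: names and semantics
    // are pinned down before any implementation exists.
    type TaskQueue interface {
        // Push enqueues a task; lower priority values are served first.
        Push(id string, priority int)
        // Pop returns the next task id, or false when the queue is empty.
        Pop() (string, bool)
    }

    type entry struct {
        id       string
        priority int
    }

    // sliceQueue is the kind of body one hands to the agent to write,
    // then reviews and adjusts.
    type sliceQueue struct{ items []entry }

    func New() TaskQueue { return &sliceQueue{} }

    // Push keeps items sorted by priority (stable for equal priorities).
    func (q *sliceQueue) Push(id string, priority int) {
        i := sort.Search(len(q.items), func(i int) bool {
            return q.items[i].priority > priority
        })
        q.items = append(q.items, entry{})
        copy(q.items[i+1:], q.items[i:])
        q.items[i] = entry{id, priority}
    }

    func (q *sliceQueue) Pop() (string, bool) {
        if len(q.items) == 0 {
            return "", false
        }
        id := q.items[0].id
        q.items = q.items[1:]
        return id, true
    }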
theshrike79 · 7 minutes ago
> I suddenly have the homelab of my dreams, all the ideas previously in the "too long to execute" category now get vibe coded while watching TV or doing other stuff

This is the true game changer.

I have a large-ish NAS that's not very well organised (I'm trying; it's a consolidated mess of different sources from two decades, but at least they're all in the same place now).

It was faster to ask Claude to write me a search database backend + frontend than to click through the directories and wait for the slow SMB shares to update, trying to find that one file I knew was in there.

Now I have a Go backend that crawls my NAS every night and indexes files into an FTS5 SQLite database with minimal metadata (size + mimetype + mtime/ctime), plus a simple web frontend I can use to query the database.

...actually, I kinda want a CLI search tool that uses the same schema. Brb. Done.

AI might be a bubble etc., but I'll still have that search tool (and two dozen other utilities) in 5 years, when a Claude monthly subscription is 2000€ and comes with a right to harvest your organs on non-payment.
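A minimal sketch of what such a CLI search tool could look like, under stated assumptions: the table name (files), its columns (path, name, size, mimetype, mtime) and the index location are all invented here, not taken from the actual tool. It uses the pure-Go modernc.org/sqlite driver, which ships with FTS5 compiled in:

    // nassearch: hypothetical CLI querying the same FTS5 index the
    // nightly crawler would populate. Assumed schema (not from the
    // thread):
    //   CREATE VIRTUAL TABLE files USING fts5(
    //       path, name, mimetype UNINDEXED, size UNINDEXED, mtime UNINDEXED);
    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "os"

        _ "modernc.org/sqlite" // pure-Go SQLite driver with FTS5 built in
    )

    func main() {
        if len(os.Args) < 2 {
            log.Fatal("usage: nassearch <query>")
        }
        db, err := sql.Open("sqlite", "index.db") // hypothetical index path
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // FTS5 full-text match; "rank" orders results by relevance.
        rows, err := db.Query(
            `SELECT path, size, mimetype FROM files
             WHERE files MATCH ? ORDER BY rank LIMIT 25`, os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var path, mime string
            var size int64
            if err := rows.Scan(&path, &size, &mime); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%12d  %-20s  %s\n", size, mime, path)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }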
BrandoElFollito · an hour ago
> I guess you get the most value out of it when you know exactly what you want

Oh yes. I've been developing as an amateur for 35 years, and when I vibe code I let the basic, generic stuff happen and then tell the AI to refactor it the way I want. It usually works.

I had the same "too boring to code" feeling, and AI was a revelation. It takes away the typing but, when used correctly, leaves room for the creative part. I love this.
donw · an hour ago
Same here. You have to slice things small enough for the agent to execute effectively, but beyond that, it's magic.
andy_ppp · an hour ago
I honestly find AI quite poor at writing good, well-thought-through tests, potentially because:

1. Writing testable code is part of writing good tests.

2. Testing is done poorly in the training data, because humans are also bad at writing tests.

3. Tests should be focused around business logic and describing the application, rather than arbitrarily testing things in an uncanny valley of AI slop (see the sketch below).
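To make point 3 concrete, a hypothetical table-driven Go test over an invented bulk-discount rule, written so each case reads as a line of the business spec rather than an arbitrary assertion:

    // Invented domain for illustration: orders of 100+ units get 10% off.
    package pricing

    import "testing"

    // TotalCents prices an order in integer cents; a 10% bulk discount
    // applies once the order reaches 100 units.
    func TotalCents(units, unitCents int) int {
        total := units * unitCents
        if units >= 100 {
            total = total * 90 / 100
        }
        return total
    }

    // The test names and cases describe the business rule, not the code.
    func TestBulkDiscount(t *testing.T) {
        cases := []struct {
            name  string
            units int
            want  int
        }{
            {"below the threshold, full price", 99, 9900},
            {"at the threshold, 10% off", 100, 9000},
            {"large orders keep the discount", 250, 22500},
        }
        for _, c := range cases {
            t.Run(c.name, func(t *testing.T) {
                if got := TotalCents(c.units, 100); got != c.want {
                    t.Errorf("TotalCents(%d, 100) = %d, want %d",
                        c.units, got, c.want)
                }
            })
        }
    }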