| ▲ | vb-8448 14 hours ago |
| It's not just about "building" ... who is going to maintain all this new sub-par code pushed to production every day? Who is going to patch all the bugs, edge cases, and security vulnerabilities? |
|
| ▲ | Havoc 11 hours ago | parent | next [-] |
| Nobody. In fact, looking at the vibe coders' enthusiasm for serverless, I'm expecting a sharp spike in surprise cloud bills, never mind anyone thinking about edge cases |
|
| ▲ | sdoering 14 hours ago | parent | prev | next [-] |
| I happily got rid of a legacy application (lost the pitch; another agency now has to deal with the shit) that I inherited, as a somewhat technically savvy person, about a year ago. It was built by real people. Not a single line of AI slop in it. It was the most fragile crap I ever had the misfortune to witness. Even in my wildest vibe-coding-a-prototype moments, I was not able to get the AI to produce that amount of anti-patterns, bad practices, and code that would have had Hitchcock running. I think we would be shocked to see what kind of human slop is running in production out there. The scale might change, but at least in this example, if I had rebuilt the app purely by vibe coding, the code quality and the security of the code would actually have improved, even with the lowest vibe-coding effort thinkable. I am not in any way condoning bad practices, or shipping vibe code into prod without very, very thorough review. Far from it. I am just trying to provide a counterpoint to the narrative: at least in the medium-sized businesses I got to know in my time consulting/working in agencies, I have seen quite a metric ton of slop that would make coding agents shiver. |
| |
| ▲ | neom 14 hours ago | parent | next [-] | | DigitalOcean version 1 was a duct-taped-together mash of bash, cron jobs, and perl. 2 people out of 12 understood it; 1 knew how to operate it. It worked, but it was insane, like really, really insane. 0% chance the original ChatGPT would have written something as bad as DO v1. | | |
| ▲ | an0malous 11 hours ago | parent [-] | | Are you suggesting the original ChatGPT could build DigitalOcean? | | |
| ▲ | neom 11 hours ago | parent [-] | | To me, built and written are not the same. Built: OK, maybe that's an exaggeration. But could an early "this is pretty good at code" LLM have written DigitalOcean v1? I think it could, yes (no offense, Jeff). In terms of volume of code and size of architecture, yeah, it was big and complex, but it was literally a bunch of relatively simple cron, bash, and perl, and the whole thing was very... sloppy (because we were moving very quickly). DigitalOcean, as I last knew it (a very long time ago), had transformed into a very well-written, modern Go shop. (Source: I am part of the "founding team" or whatever.) |
|
| |
| ▲ | vb-8448 14 hours ago | parent | prev | next [-] | | AI doesn't overcome the limits of whoever is giving the input; as with pre-AI-era software, if the input sucks, the output sucks. What changed is the speed: AI and vibe coding just gave a turboboost to everything you described. The amount of code will go parabolic (maybe it already is) and, in the mid-term, we will need even more SWE/SRE/devops/security people, etc., to keep up. | |
| ▲ | geon 14 hours ago | parent | prev [-] | | The argument isn’t that all slop is AI, but that all AI is slop. | | |
| ▲ | baq 14 hours ago | parent | next [-] | | Turns out building enterprise software has more in common with generating slop than not. | |
| ▲ | 14 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | mountainriver 13 hours ago | parent | prev | next [-] |
| I hear this argument all the time, but it seems to leave out code reviews. |
| |
| ▲ | nsxwolf 13 hours ago | parent [-] | | In teams of high performers who have built a lot of mutual trust, code reviews are mostly a formality and a stopgap against big, obvious accidental blunders. "LGTM!" I do not know or trust the agents that are putting out all this code, and the code review process is very different. Watching the Copilot code review plugin complain about Agent code on top of it all has been quite an experience. |
|
|
| ▲ | soco 14 hours ago | parent | prev [-] |
| The theory is very simple: you tell the agent to patch the bug. Now the practice, though... |
| |
| ▲ | fullstackwife 14 hours ago | parent | next [-] | | yeah, in practice: would you like to board a Boeing 747 where some of the bugs were patched by agents? What percentage risk of malfunction are you going to accept as a passenger? | | |
| ▲ | emodendroket 14 hours ago | parent | next [-] | | No. But most software products are nowhere near that sensitive and very few of them are developed with the level of caution and rigor appropriate for a safety-critical component. | |
| ▲ | TuringNYC 14 hours ago | parent | prev [-] | | >> yeah, in practice: would you like to onboard a Boeing 747 where some of the bugs were patched by some agents, In this case, the traditional human process hasn't gone well either. | | |
| ▲ | geon 14 hours ago | parent | next [-] | | It works great as long as it is adhered to and budgeted for. | |
| ▲ | fullstackwife 14 hours ago | parent | prev | next [-] | | The human process comes with the understanding that mistakes will make people die | |
| ▲ | dboreham 14 hours ago | parent | prev [-] | | The bugs were mostly caused by MBAs, who, one assumes, will remain. |
|
| |
| ▲ | Havoc 14 hours ago | parent | prev [-] | | You are a senior expert. SENIOR EXPERT :D [0] https://www.youtube.com/shorts/64TNGvCoegE |
|