ModernMech 2 hours ago

The evidence that software development cannot be automated is that we already tried to do it in the 90s with OOP, UML, and outsourcing. It didn't work out, for the same reasons vibe coding isn't working out — because building the system is the same as specifying it, and that is a creative, iterative process.

We are at the point where, sure, AI can write code, but we could always do that; lack of code-writing ability was not what killed the OOP automation efforts. There was plenty of coding ability back then, too. Whether the code is written by an offshore team in India or by Claude doesn't change the larger picture of building the software.

TomasBM an hour ago | parent

This may be evidence that it's more difficult than evangelists first imagine, but it's not evidence of a technical obstacle. Generally, "automation failed" does not imply that "automation is impossible".

To your individual points:

- OOP and UML are domain-specific abstractions. Aside from still being very much used in expanding niches [1], they have failed to automate much work because their proponents failed to cover enough cases to have a useful general-purpose abstraction.

- Outsourcing is a labor strategy. There's nothing technical that prevents another similarly capable person from doing your job, at least in the next town, if not another country. The obstacles were, and are, social and political, and the WFH movement shows that. Also, outsourcing is not going anywhere; it has merely shrunk and shifted toward nearshoring due to backlash.

- By contrast, software is a general-purpose abstraction [2]. Databases are a type of software. You can see LLMs [3] as schema-less databases that contain millions of abstractions connected to each other. You can get a UML model, Python code, or plain text by querying the LLM in a language much more flexible than SQL.
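To make the analogy concrete, here's a toy sketch contrasting the two kinds of "query". The SQL half is real; the `ask_llm` function is a hypothetical stand-in with canned answers — a real system would call a model API instead:

```python
import sqlite3

# Schema-bound query: SQL requires the table and columns to be declared up front,
# and the question must be phrased in terms of that fixed schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shapes (name TEXT, sides INTEGER)")
db.execute("INSERT INTO shapes VALUES ('triangle', 3), ('square', 4)")
rows = db.execute("SELECT name FROM shapes WHERE sides = 4").fetchall()
print(rows)  # [('square',)]

# Schema-less "query": any phrasing, any output shape (text, code, a model...).
# `ask_llm` is a hypothetical stand-in for a real model API call.
def ask_llm(prompt: str) -> str:
    canned = {
        "How many sides does a square have?": "4",
        "Write a Python function that doubles a number.":
            "def double(x):\n    return 2 * x",
    }
    return canned[prompt]

print(ask_llm("How many sides does a square have?"))  # 4
```

The point of the sketch is only the interface difference: SQL answers questions the schema anticipated, while the LLM-as-database accepts arbitrary prompts and returns arbitrary artifacts.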

Vibe coding makes it seem like the funny intermediate bullshit is the end result of using LLMs, but it's not. Sure, I agree that LLMs don't make sense to use when a calculator is enough, but I don't see any fundamental limitations on improving LLMs. Maybe new algorithms or combinations are needed, but no matter how slowly, quality should eventually reach at least human level for the majority of current tasks (on which many jobs depend).

Which leads to my point: we need political, social, and philosophical reasons to limit or integrate automation in our civilization, not just watch and hope there's a big enough technical obstacle that we can keep our current jobs.

[1] For example, model-based software engineering is still growing; slowly, but growing.

[2] So is the organization of mechanical machines or analog computers, but it's faster to reorganize and orchestrate electrical signals.

[3] More precisely, foundation models, because it's far more than natural language processing.