woeirua 4 hours ago

This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)

It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.

9dev 4 hours ago | parent | next [-]

If you want to draw that line of argument, it's more like horse riders being convinced to give up their horses in favour of trains: you're travelling faster, don't have to navigate yourself or think about every boulder on the way; but there are destinations you can't reach, overcrowded trains slowing down the journey, hefty ticket prices, and instead of enjoying the freedom, you're degraded to a passive passenger.

hansmayer 4 hours ago | parent [-]

Very funny, this. Did we need forward-deployed engineers to convince people that they absolutely needed to use trains in order to "not be left behind"? Or any other kind of hype? Or was it sort of obvious and did not need to be explained so much - unlike a bad joke called LLMs?

9dev 4 hours ago | parent [-]

Actually, absolutely! Initially, people were really afraid of trains, fearing they wouldn’t be able to breathe at those speeds. It took a lot of convincing to establish trust in the technology.

uuyy 3 hours ago | parent [-]

Ever heard of subsidising? :’)

lkjdsklf 2 hours ago | parent | prev | next [-]

> there’s no compelling argument as to why that is the case.

I'm not sure that's true. We've already seen several vibe-coded open source projects literally fold up and disappear because they ran into issues the AI couldn't solve, and no one understood the code well enough to fix them.

There's a reason openai/anthropic and friends are hiring shitloads of software engineers. You still need people who can understand and fix things when the AI goes off the rails, which happens way more often than any of those companies would like to admit. Sure, "fixing things" often involves having the AI correct itself, but you still have to understand the system well enough to know how and when to do that.

caconym_ 4 hours ago | parent | prev | next [-]

I am sure you will feel that this is missing the point of your analogy, but we would not have gotten very far with automobiles if we didn't know how they worked.

throw310822 3 hours ago | parent [-]

You are breaking the analogy: automobiles are machines for transportation, and understanding them is important to make them move. LLMs are machines for understanding, and well, if they do the understanding, you don't need to.

caconym_ 3 hours ago | parent [-]

The thing we're worried about not understanding here is the software the LLMs write, not the LLMs themselves.

The direct analogy to automobiles would be if each automobile were a one-off design filled with bad and bizarre decisions, excessively redundant parts, insane routing of wires, lines, ducts, etc., and generally poor serviceability. IMO the big question going forward is whether the consistent availability of LLMs can render these kinds of post-delivery issues moot (they will reliably [catch and] fix problems in the software they wrote before any real damage is caused), or whether human reliance on LLMs and abdication of understanding will just make software worse, because LLMs' ability to fix their own mistakes, and the consequences thereof, generally breaks down in the same contexts and complexities where they made those mistakes in the first place.

My own observations are that moderately complex software written in the mode of "vibe coding" or "agentic engineering" tends to regress to barely-functional dogshit as features are piled on, and that once this state is reached, the teams behind it are unable to unfuck it, or perhaps simply uninterested in doing so. I have stopped using software that has gone down this path, not because I have some philosophical objection to it, but because it has become _literally unusable_. But you will certainly not catch me claiming to know what the future holds.

jgbuddy 4 hours ago | parent | prev [-]

agreed completely