jerf 2 days ago

Worse yet, the problems are going to be real.

There's a lifecycle to these hype runs, even when the thing behind the hype is plenty real. We're still in the phase where if you criticize AI you get told you don't "get it", so people are holding back some of their criticisms because they won't be received well. In this case, I'm not talking about the criticisms of the people standing back and taking shots at the tech, I'm talking about the criticisms of those heavily using it.

At some point, the dam will break, and it will become acceptable, if not fashionable, to talk about the real problems the tech is creating. Right now there is only the tiniest trickle from the folk who just don't care how they are perceived, but once it becomes acceptable it'll be a flood.

And there are going to be problems that come from using vast quantities of AI on a code base, especially of the form "created so much code my AI couldn't handle it anymore, and neither could any of the humans involved". There will need to be a discussion of techniques for handling this. There are going to be characteristic problems and solutions.

The thing that really makes this hard to track, though, is that the tech itself is moving faster than this cycle does. But if the exponential curve turns into a sigmoid curve, we're going to start hearing about these problems. If we just get a few more incremental improvements on what we have now, there absolutely are going to be patterns for how to use AI, and some very strong anti-patterns that we'll discover, and there will be consultants, and little companies that specialize in fixing the problems, and people who propose buzzword solutions and give lots of talks about them and attract an annoying following online, and all that jazz. Unless AI proceeds to the point that it can completely replace a senior engineer from top to bottom, this is inevitable.

jimbokun 14 hours ago | parent | next

Early in my career I would sometimes be told not to worry about making the code “nice”, just get it working and move on. I would nod and then write good code like I always did, knowing it didn’t take any longer than writing bad code, and would be much easier to modify, extend, and fix later.

I feel like there’s a similar vibe coming with vibe coding: just let the AI generate as much code as it wants, and don’t check it, because it supposedly doesn’t matter when only the LLM will be reading it anyway.

My gut tells me that

1. there will still be reasons for humans to understand the code for a long time,

2. even the LLM will struggle with modifying code past a certain size and complexity without good encapsulation and a well-thought-out system architecture and design.
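To make point 2 concrete, here's a minimal sketch (the `OrderStore` class and its methods are hypothetical, invented purely for illustration) of the kind of encapsulation being described: a narrow public interface keeps changes local, so neither a human nor an LLM needs to read, or fit in its context window, the rest of the codebase to modify the internals.

```python
class OrderStore:
    """All persistence details live behind this narrow interface."""

    def __init__(self) -> None:
        # Plain dict for now; could be swapped for SQLite or a service
        # later without any caller needing to change.
        self._orders: dict[str, float] = {}

    def add(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def total_revenue(self) -> float:
        return sum(self._orders.values())


# Callers depend only on add() and total_revenue(); rewriting the
# storage layer touches this one class, not every call site.
store = OrderStore()
store.add("a1", 10.0)
store.add("a2", 5.5)
print(store.total_revenue())  # 15.5
```

The point isn't this toy class itself, but that the boundary it draws bounds how much code anyone, human or model, has to hold in their head to make a safe change.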

jerf 13 hours ago | parent

I classify your latter points under "AIs are Finite": https://jerf.org/iri/post/2026/what_value_code_in_ai_era/

simonw 2 days ago | parent | prev

> And there are going to be problems that come from using vast quantities of AI on a code base, especially of the form "created so much code my AI couldn't handle it anymore and neither could any of the humans involved". There's going to need to be a discussion on techniques on how to handle this. There's going to be characteristic problems and solutions.

That's essentially the thing we are calling "cognitive debt".

I have a chapter with one small thing to help address that here - https://simonwillison.net/guides/agentic-engineering-pattern... - but it's a much bigger topic and will require extensive exploration by the whole industry to figure out.

jerf 2 days ago | parent

Yeah, it's hard to even get started until we can go three months without a significant improvement in the AIs. Today's characteristic failures may not be 2027's characteristic failures. Example: today I'm complaining that the AIs tend not to abstract as often as I'd like, but it's not hard to imagine that flipping until they're all architecture astronauts instead.