peacebeard 9 hours ago
This is definitely part of it. I think another part is that AI tools demo really well, which easily hides how imperfect and limited they are when people only see a contrived or cherry-picked example. Not a lot of people have good intuition for this yet. Many people understand "a functional prototype is not a production app," but far fewer understand "an AI that can be shown writing functional code is not a software engineer," because this reality is evolving so rapidly. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially when you consider that much of it is motivated (e.g., "AI is bad because it's bad to fire engineers," which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find it out one step at a time. A lot of lessons are going to be learned the hard way.
Esophagus4 9 hours ago | parent
> AI tools demo really well

Yes, and they work really well for the kind of small side project an exec probably used to try out the LLM. But writing code in one clean, discrete repo is only part of shipping something, especially at a large org. Over time, I think tooling will get better at the pieces surrounding writing the code. But the human coordination and dependency pieces are still tricky to automate.