bogzz 2 hours ago

In my opinion, the trade-off is worth it when you're in a time crunch to deliver a demo, or are asked to test out an idea for a new feature (also in a time crunch).

The hope is that, if the feature were kept or the demo fleshed out, developers would reshape and refactor the project around the newly discovered requirements, or start from scratch, having hopefully learnt from the agentic rush.

To me, it always boils down to LLMs being probabilistic models: they can do more of the same that has been done thousands of times, but they also exhibit emergent reasoning-like properties that sometimes allow them to combine patterns. It's not actual reasoning; it's a facsimile of reasoning. The bigger the models and the better the RLHF and fine-tuning, the more useful they become, but my intuition is that LLMs will only ever asymptotically approach actual reasoning without being able to get there.

So the notion of no-human-brain-in-the-loop programming is, to me, a fool's errand. I obviously hope I am right here, but we'll see. Ultimately you need accountability, and for accountability you need human understanding. Trying to move fast without waiting for comprehension to catch up (which would most likely produce alternate, better approaches to the problem at hand) increases entropy and pushes problems further down the road.