eschaton 2 hours ago
We don’t need more software, we need the right software implemented better. That’s not something LLMs can possibly give us because they’re fucking pachinko machines.

Here’s a hint: nobody should ever write a CRUD app, because nobody should ever have to write a CRUD app. That’s something that can be generated fully and deterministically (i.e. by a set of locally-executable heuristics, not a goddamn ocean-boiling LLM) from a sufficiently detailed model of the data involved. In the 1970s you could wire up an OS-level forms library to your database schema and serve literally thousands of users from a system less powerful than the CPU in a modern peripheral or storage controller. And in less RAM, too. People need to look at what was done before in order to have a proper degree of shame about how things are being done now.
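A minimal sketch of the idea, assuming a tiny declarative model of the data. The table, field names, and helper functions here are illustrative, not any particular historical forms system:

    # Deterministically derive CRUD SQL from a declarative model.
    # All names here are hypothetical, for illustration only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Field:
        name: str
        sql_type: str  # e.g. "TEXT", "INTEGER"

    @dataclass(frozen=True)
    class Model:
        table: str
        fields: tuple[Field, ...]  # convention: first field is the primary key

    def create_table_sql(m: Model) -> str:
        cols = ", ".join(f"{f.name} {f.sql_type}" for f in m.fields)
        return f"CREATE TABLE {m.table} ({cols})"

    def crud_sql(m: Model) -> dict[str, str]:
        cols = ", ".join(f.name for f in m.fields)
        marks = ", ".join("?" for _ in m.fields)
        pk = m.fields[0].name
        sets = ", ".join(f"{f.name} = ?" for f in m.fields[1:])
        return {
            "create": f"INSERT INTO {m.table} ({cols}) VALUES ({marks})",
            "read":   f"SELECT {cols} FROM {m.table} WHERE {pk} = ?",
            "update": f"UPDATE {m.table} SET {sets} WHERE {pk} = ?",
            "delete": f"DELETE FROM {m.table} WHERE {pk} = ?",
        }

    user = Model("users", (Field("id", "INTEGER"), Field("email", "TEXT")))
    print(create_table_sql(user))
    for op, sql in crud_sql(user).items():
        print(f"{op}: {sql}")

Everything above is a pure function of the schema. A forms layer is the same kind of mechanical derivation, no inference required.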
steve_adams_86 an hour ago
> That’s not something LLMs can possibly give us because they’re fucking pachinko machines.

I mostly agree, but I do find them useful for fuzzing out tests and finding issues with implementations. I’ve moved away from using LLMs for larger architectural sketches because, over longer time scales, I no longer find they actually save time. But I do think they’re useful for finding ways to improve correctness and safety in code.

It isn’t the exciting, magical thing AI platforms want people to think it is, and it isn’t indispensable, but I like having it handy sometimes. The key is that it still requires an operator who knows something is missing, or that there are still improvements to be made, and how to suss them out. In the hands of people who don’t know, I agree that it’s essentially a pachinko machine.
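As a concrete example of the kind of test I mean, here’s a sketch of a property-based test using the hypothesis library; `normalize_email` is just a stand-in for whatever function is actually under test:

    # Sketch of a property-based test an LLM can help flesh out.
    # `normalize_email` is a hypothetical function under test.
    from hypothesis import given, strategies as st

    def normalize_email(addr: str) -> str:
        return addr.strip().lower()

    @given(st.text())
    def test_normalize_is_idempotent(addr):
        # Normalizing twice should give the same result as once.
        once = normalize_email(addr)
        assert normalize_email(once) == once

    if __name__ == "__main__":
        test_normalize_is_idempotent()

The model can suggest properties worth checking, but you still have to know which ones actually matter and whether they hold by design or by accident.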