shortstuffsushi 2 days ago

While a lot of these ideas are touted as "good for the org," in the case of LLMs they're more like guardrails for something that can't reason things out. That doesn't mean the practices are bad, but I would much prefer that these LLMs (or some better mechanism) everyone is being pushed to use could actually reason, remember, and improve, so that this sort of guarding wouldn't be a requirement for correct code.

kaffekaka 2 days ago | parent [-]

The things GP listed are fundamentally good practices. If LLMs get so good that they don't even need these guardrails, great — but that is a long way off. Until then, I am really happy if the outcome of AI-assisted coding is that we humans get better at applying these ideas ourselves.