rmunn | 5 hours ago
Software developers don't have to be cargo-culty... if they're working on systems that are well-documented or open-source (or at least source-available), so that you can actually dig in and find out how the system works. But with LLMs, the internals are not well-documented, most are not open-source (and even if the model and weights are open, it's impossible for a human to read a grid of numbers and understand exactly how it will change its output for a given input), and there's also an element of randomness inherent in how the LLM behaves.

Given that, it's not surprising to find that developers trying to use LLMs end up adding certain inputs out of what amounts to superstition ("it seems to work better when I tell it to think before coding, so let's add that instruction and hopefully it'll help avoid bad code", though there's very little way to be sure it actually did anything). It honestly reminds me of gambling fallacies, e.g. tabletop RPG players who have a "lucky" die they bring out for important rolls. There's insufficient evidence to be sure that this line, which you add to all your prompts by putting it in AGENTS.md, is doing anything — but it makes you feel better to have it in there.

(None of which is intended as a criticism, BTW: that's just what you have to do when using an opaque, partly-random tool.)