kgwxd 5 hours ago
> There is, for lack of a better word, value in failures

Learning? Isn't that what these things are supposedly doing?
simonw 4 hours ago
LLMs notoriously don't learn anything: they reset to a blank slate every time you start a new conversation. If you want them to learn, you have to actively set them up to do that. The simplest mechanism is to use a coding agent tool like Claude Code and frequently remind it to make notes for itself, to look at its own commit history, or to search for examples in the codebase available to it.
the_mitsuhiko 5 hours ago
If by "these things" you mean large language models: they are not learning. Famously so; that's part of the problem.
mock-possum 3 hours ago
No, we’re the ones who are learning. There’s some utility in instructing them to ‘remember’ by writing to CLAUDE.md or similar, and to ‘recall’ by reading what they wrote later. But they’ll rarely if ever do it on their own.
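For anyone who hasn't tried this: the ‘remember’/‘recall’ pattern is just a plain text file the agent is told to read at the start of each session and append to as it works. The filename CLAUDE.md comes from the comment above; the contents here are a hypothetical sketch, not anything prescribed by the tool:

```markdown
# CLAUDE.md — persistent notes for the coding agent

## Standing instructions
- At the start of each session, re-read this file and skim recent commit messages.
- After an approach fails, record what failed and why under "Lessons" below.

## Lessons (append-only)
<!-- hypothetical example entries -->
- Run the test suite via `make test`; invoking the runner directly misses fixtures.
- Files under `db/migrations/` are applied in filename order.
```

The point of the structure is that the ‘memory’ is nothing more than prompt text you re-inject every session — which is exactly why the agent has to be explicitly told to maintain it.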