adampunk an hour ago
LLMs will make mistakes on every turn. The mistakes will have little to no apparent connection to "difficulty" or to what may or may not be prevalent in the training data. They will be mistakes at all levels of operation, from planning to code writing to reporting. Whether those mistakes matter, and whether you catch them, is mostly up to you. I have yet to find a model that does not make mistakes each turn. I suspect this kind of error is fundamentally incorrigible. The most interesting thing about LLMs is that despite the above (and their non-determinism) they're still useful.
simonw 19 minutes ago | parent
> I have yet to find a model that does not make mistakes each turn

What kind of mistakes are you talking about here?
pyrolistical an hour ago | parent
As a human, I make typos all the time.