wxce (a day ago):
> this is an RL problem where you have to balance the chance of an infinite loop (it keeps thinking there's a little more to do when there is not) against the opposite, where it stops short of actual completion.

Any idea why the other end of the spectrum behaves this way, thinking there's always something left to do? I have a pet theory for stopping early: positive tool responses and the like bias it toward thinking it's complete (could be extremely wrong).
skybrian (4 hours ago):
My pet theory: LLMs are good at detecting and continuing patterns. Repeating the same thing is a rather simple pattern, and there's no obvious place to stop if an LLM falls into it unintentionally. To an unsophisticated LLM, at least, the most likely completion is to continue the pattern, so infinite loops are more of a default, and the question is how to avoid them. Picking randomly (non-zero temperature) sometimes helps prevent repetition. Other, higher-level patterns probably prevent this from happening most of the time in more sophisticated LLMs.
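To make the temperature point concrete, here's a toy sketch (made-up probabilities and a made-up `next_token_probs` function, not any real model's decoding code): with greedy decoding, a distribution that slightly favors repeating the last token never escapes the loop, while sampling at non-zero temperature eventually picks something else.

```python
import random

# Toy next-token "model" (hypothetical numbers, not a real LLM): once the
# last token was "again", the model slightly prefers emitting "again" again.
def next_token_probs(last_token):
    if last_token == "again":
        return {"again": 0.6, "done": 0.4}
    return {"again": 0.5, "done": 0.5}

def sample(probs, temperature):
    if temperature == 0:                      # greedy: always take the argmax
        return max(probs, key=probs.get)
    # Re-weight probabilities by 1/temperature, then sample proportionally.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

def generate(temperature, max_steps=20):
    out, last = [], "again"
    for _ in range(max_steps):
        tok = sample(next_token_probs(last), temperature)
        out.append(tok)
        if tok == "done":                     # the only way the loop ends early
            break
        last = tok
    return out

print(generate(temperature=0))   # greedy: repeats "again" until max_steps
print(generate(temperature=1))   # sampled: usually hits "done" within a few steps
```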
yencabulator (a day ago):
> Any idea why the other end of the spectrum behaves this way, thinking there's always something left to do?

Who said anything about "thinking"? Smaller models were notorious for getting stuck repeating a single word over and over, or just "eeeeeee" forever. Larger models only change the probabilities, not the fundamental nature of the machine.