| ▲ | rtgfhyuj a day ago |
| Why would it stop early? Examples? |
|
| ▲ | mickeyp 21 hours ago | parent | next [-] |
| Models just naturally arrive at the conclusion that they are done. TODO hints can help, but they are not infallible: Claude will stop and happily report there's more work to be done and "you just say the word Mister and I'll continue". This is an RL problem where you have to balance the chance of an infinite loop (it keeps thinking there's a little bit more to do when there is not) against the opposite, where it stops short of actual completion. |
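| In practice that balance gets wired up with an explicit completion sentinel plus a turn cap; here's a minimal sketch (Python; call_model is a hypothetical stub, not any real API, and TASK_COMPLETE is an invented sentinel): |

    # Minimal sketch: a turn cap bounds the infinite-loop side, and an
    # explicit sentinel keeps "done-ish" chatter from ending the run early.
    MAX_TURNS = 20

    def call_model(transcript):
        # Hypothetical stand-in for a real model call (assumption, not a real API).
        return "TASK_COMPLETE"  # trivial stub so the sketch runs end-to-end

    def run_agent(task):
        transcript = [task]
        for _ in range(MAX_TURNS):
            reply = call_model(transcript)
            transcript.append(reply)
            if "TASK_COMPLETE" in reply:  # explicit completion sentinel
                break
            # Nudge against stopping short of actual completion.
            transcript.append("If anything remains, continue; otherwise say TASK_COMPLETE.")
        return transcript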
| |
| ▲ | wxce 11 hours ago | parent [-] |
| > This is an RL problem where you have to balance the chance of an infinite loop (it keeps thinking there's a little bit more to do when there is not) against the opposite, where it stops short of actual completion. |
| Any idea why the other end of the spectrum behaves this way -- always thinking it has something left to do? I do have a pet theory for the stopping-early side -- that positive tool responses and the like bias it toward thinking it's complete (could be extremely wrong). |
| ▲ | yencabulator 6 hours ago | parent [-] |
| > Any idea why the other end of the spectrum behaves this way -- always thinking it has something left to do? |
| Who said anything about "thinking"? Smaller models were notorious for getting stuck repeating a single word over and over, or just "eeeeeee" forever. Larger models only change the probabilities, not the fundamental nature of the machine. |
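| A toy illustration of that failure mode (a made-up greedy lookup table, not any real model): once the most likely next token maps back to itself, decoding cycles forever. |

    # Toy greedy "decoder": an invented next-token table that gets stuck on "e".
    NEXT = {"<s>": "e", "e": "e"}  # purely illustrative

    token, out = "<s>", []
    for _ in range(10):  # bounded here; an unbounded loop would never stop
        token = NEXT[token]
        out.append(token)
    print("".join(out))  # -> "eeeeeeeeee"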
|
|
|
| ▲ | embedding-shape a day ago | parent | prev [-] |
| Not all models are trained for long one-shot task-following on their own; many seem to prefer closer back-and-forth with the user. You could always add another layer or abstraction above or below to work around it. |
| |
| ▲ | fastball a day ago | parent [-] |
| Can't this just be a Ralph Wiggum loop (i.e. while True)? |
| ▲ | embedding-shape 19 hours ago | parent [-] |
| Sure, but I think just about everyone wants the agent to eventually say "done" in one way or another. |
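| The while-True version is roughly this shape (Python sketch; call_model is a hypothetical stub and "DONE" an invented sentinel); the loop itself is trivial, and everything hinges on the done check: |

    # Sketch of the "Ralph Wiggum" / while-True approach.
    def call_model(prompt):
        # Hypothetical stand-in for a real model call (assumption, not a real API).
        return "DONE"  # trivial stub so the sketch terminates

    def ralph_wiggum_loop(task):
        prompt = task
        while True:
            reply = call_model(prompt)
            if reply.strip().endswith("DONE"):  # the fragile part: deciding it's done
                break
            prompt = "Keep going. Say DONE when nothing is left."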
|
|