queueueue | 14 hours ago |
Ironic that I'm about to give another anecdotal experience here, but I've noticed this myself too. I catch myself continuing to prompt after an LLM has failed to solve some problem in a specific way, even though at that point I could probably do it faster by switching to doing it fully myself. Maybe it's because the LLM output feels like it's 'almost there', or some sunk cost fallacy.
qwery | 9 hours ago | parent |
Not saying this is you, but another way to look at it is that engaging in that process is training you (again, not you specifically, but the user) -- the way you get results is by asking the chatbot, so that's what you try first. You don't need sunk cost or gambling mechanics; it's just simple conditioning. Press lever --> pellet. Want pellet? --> press lever. Pressed lever but no pellet? --> press lever again.