csomar 20 hours ago
Honestly, I think many hallucinations are the LLM's way of "moving forward". For example, the LLM will try something, not ask me to test it (and it can't test it itself), and then carry on to say "Oh, this shouldn't work, blah blah, I should try this instead." Now that LLMs can run commands themselves, they are able to test and react to feedback. But lacking that, they'll hallucinate things (e.g., hallucinate tokens/API keys).
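To make the contrast concrete, here is a minimal sketch of the kind of run-test-react loop the comment describes, assuming a stubbed `ask_llm` function standing in for a real model call (the function name and loop shape are illustrative, not any specific product's API):

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Stub for a model call; a real agent would query an LLM here."""
    raise NotImplementedError("wire up a real model client")

def agent_step(task: str, max_tries: int = 3) -> None:
    # Without a loop like this, the model must guess whether its command
    # worked -- the "hallucinate and move forward" failure mode above.
    feedback = ""
    for _ in range(max_tries):
        command = ask_llm(
            f"Task: {task}\nLast output:\n{feedback}\nNext shell command?"
        )
        # Run the proposed command and capture real output.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True
        )
        feedback = result.stdout + result.stderr
        if result.returncode == 0:
            break  # a real exit code replaces a guessed success
```

The point of the sketch is the feedback variable: actual stdout/stderr and exit codes go back into the next prompt, so the model reacts to reality instead of inventing tokens or API keys to keep moving.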
braebo 19 hours ago | parent
Refusing to give up is a benchmark optimization technique with unfortunate consequences.