tines 2 days ago
So you have to be able to identify a priori what is and isn't a hallucination, right?
ares623 2 days ago
The oracle problem is solved. Just use an actual oracle.
happyPersonR 2 days ago
I guess the real question is how often you see the same class of hallucination. For something where you're using an LLM agent/workflow and running it repeatedly, I could totally see this being worthwhile.
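To make that concrete, here's a minimal sketch (all names and patterns below are hypothetical illustrations, not from any tool mentioned in the thread) of screening repeated workflow runs against hallucination classes that have already been identified by hand:

    import re

    # Hypothetical registry: once a hallucination class has been spotted
    # manually in earlier runs, encode it as a pattern so later runs of
    # the same workflow can be screened automatically.
    KNOWN_HALLUCINATION_PATTERNS = [
        re.compile(r"as an AI language model", re.IGNORECASE),
        re.compile(r"https?://example\.invalid/\S+"),  # stand-in for fabricated links
    ]

    def flag_known_hallucinations(output: str) -> list[str]:
        """Return the registered patterns that match this output."""
        return [p.pattern for p in KNOWN_HALLUCINATION_PATTERNS if p.search(output)]

    if __name__ == "__main__":
        sample = "As an AI language model, I cannot verify that."
        print(flag_known_hallucinations(sample))

The point being: this only pays off when the same class recurs across runs, since each pattern still has to be identified by a human the first time.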
makeavish 2 days ago
Yeah, reading the headline got me excited too. I thought they were going to propose some novel solution or use the recent research by OpenAI on reward function optimization.