leetharris 5 hours ago
A few problems with this approach:

1. It regresses everything toward the "average," so outliers get discarded. For example, say a circus performer plays fetch with their frog; an LLM would treat this as an obvious error and "correct" it to "dog."

2. LLMs want to format everything as internet text, which does not align well with natural human speech.

3. Hallucinations still happen at scale, regardless of model quality.

We've done a lot of experiments on this at Rev, and it's still useful for the right scenario, but not as reliable as you might think.
falcor84 3 hours ago | parent
Regarding the frog, I would assume that the way to address this would be to feed the LLM screenshots from the video, if the budget allows.
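A minimal sketch of what that might look like, assuming a multimodal chat API that accepts images in the OpenAI message format (the model name, prompt wording, and `build_correction_request` helper are all hypothetical, not a confirmed pipeline):

```python
import base64
import json

def build_correction_request(transcript_chunk, frame_jpegs, model="gpt-4o"):
    """Pair an ASR transcript chunk with video frames from the same
    timestamps, asking the model to verify the transcript against the
    visuals instead of regressing rare words toward the statistical norm."""
    content = [{
        "type": "text",
        "text": (
            "Below is an ASR transcript chunk plus frames from the same "
            "moment in the video. Only change a word if the frames "
            "contradict it; keep unusual but visually supported words "
            "(e.g. a pet frog).\n\nTranscript: " + transcript_chunk
        ),
    }]
    for jpeg_bytes in frame_jpegs:
        # Inline each frame as a base64 data URL, per the image_url schema.
        b64 = base64.b64encode(jpeg_bytes).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64," + b64},
        })
    return {"model": model, "messages": [{"role": "user", "content": content}]}

# Build (but don't send) a request for the frog example with one dummy frame.
req = build_correction_request("he plays fetch with his frog", [b"\xff\xd8dummy"])
print(json.dumps(req)[:60])
```

Cost scales with frame count, so sampling one frame every few seconds (or only around low-confidence words) would be the obvious place to spend a limited budget.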