leetharris 3 months ago
A few problems with this approach:

1. It pulls everything back to the "average," so outliers get discarded. For example, a circus performer plays fetch with their frog; an LLM will treat this as an obvious error and "correct" it to "dog."

2. LLMs want to format everything as internet text, which does not align well with natural human speech.

3. Hallucinations still happen at scale, regardless of model quality.

We've done a lot of experiments on this at Rev, and it's still useful for the right scenario, but it's not as reliable as you may think.
ldenoue 3 months ago
Do you have anything to read about your study or experiments? Genuinely interested. Perhaps the prompts could be written to tell the LLM it's specifically handling human speech, not written text?
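
For what it's worth, here is a minimal sketch of that prompting idea using the OpenAI Python client. The model name and prompt wording are my own assumptions, not anything Rev has tested:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical system prompt: frame the task as cleaning up spoken
    # language, and explicitly tell the model to preserve unusual but
    # plausible content instead of "correcting" it toward the average.
    SYSTEM_PROMPT = (
        "You are cleaning up an automatic speech recognition transcript. "
        "This is spoken, conversational language, not written internet text. "
        "Fix only clear recognition errors; keep disfluencies, dialect, and "
        "surprising-but-possible statements (e.g. 'plays fetch with her frog') "
        "exactly as transcribed."
    )

    def correct_transcript(transcript: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model; any chat model fits here
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": transcript},
            ],
            temperature=0,  # keep post-editing as deterministic as possible
        )
        return resp.choices[0].message.content

Of course, a prompt like this can only soften the regression-to-average problem leetharris describes; the model still has to guess which oddities are errors.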
falcor84 3 months ago
Regarding the frog, I would assume that the way to address this would be to feed the LLM screenshots from the video, if the budget allows.
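
A rough sketch of that multimodal idea, assuming OpenCV for frame grabs and an OpenAI vision-capable model (both choices are mine, purely illustrative):

    import base64
    import cv2
    from openai import OpenAI

    client = OpenAI()

    def frame_at(video_path: str, msec: float) -> str:
        """Grab one frame at the given timestamp, return it base64-encoded."""
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_MSEC, msec)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise ValueError(f"no frame at {msec} ms")
        ok, jpg = cv2.imencode(".jpg", frame)
        return base64.b64encode(jpg.tobytes()).decode("ascii")

    def check_segment(line: str, video_path: str, msec: float) -> str:
        # Send the transcript line plus a frame from the same timestamp,
        # so the model can see the frog before deciding "frog" is an error.
        image_b64 = frame_at(video_path, msec)
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Does this frame support or contradict the "
                             f"transcript line: {line!r}? Answer before "
                             "suggesting any correction."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

Sampling one frame per transcript segment keeps the token cost roughly linear in video length, which is where the "if the budget allows" caveat bites.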