▲ | wood_spirit 6 hours ago |
As an aside, has anyone else seen big hallucinations in the Gemini Meet summaries? I've been using it for a week or so and love the quality of the writing, but I've noticed two recurring problems: omitting what was actually the most important point raised, and hallucinating things like "person x suggested y do z" when, really, that is absolutely the last thing x would ever suggest!
▲ | leetharris 6 hours ago | parent | next [-]
Google's ASR is one of the worst on the internet. We run benchmarks across the entire industry regularly, and the only hyperscaler with a good ASR is Azure. They acquired Nuance for $20B a while ago and have a solid lead in the cloud space. And for a "free" product, Google probably runs a very tiny, heavily quantized version of their already weak ASR. There are lots of better meeting bots out there if you don't mind paying, or if your usage is low enough to fit a free tier. At Rev we give away something like 300 minutes a month.
▲ | hunter2_ 6 hours ago | parent | prev [-]
It can simultaneously be [the last thing x would suggest] and [a conclusion that an uninvolved person tasked with summarizing might mistakenly draw, with slightly higher probability of drawing it than not], and in theory an LLM tries to output the latter. The exact same principle applies to missing the most important point.
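A toy sketch of that point, with entirely made-up candidate summaries and probabilities: a greedy decoder picks whichever reading the model scores highest, so even a near-tie between the true account and a plausible-but-wrong one resolves entirely in favor of the wrong one.

```python
# Hypothetical candidate summaries with made-up model probabilities.
# The wrong-but-plausible reading edges out the truth by a hair.
candidates = {
    "x suggested y do z": 0.51,      # plausible to an outsider, but wrong
    "x advised y against z": 0.49,   # what actually happened
}

def summarize(cands):
    # Greedy decoding: return the argmax candidate; the near-tie
    # with the correct reading is discarded entirely.
    return max(cands, key=cands.get)

print(summarize(candidates))  # -> "x suggested y do z"
```

The 0.51/0.49 split is illustrative only; the point is that the output is all-or-nothing even when the model's own uncertainty is close to a coin flip.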