fny an hour ago

It's possible to rely on mouth movements instead of sound. I've been tweaking visual speech recognition (VSR) models for the past few weeks so that I can "talk" to my agents at the office without pissing everyone off. It works okay. Limiting the language to "move this" and "clear that" alongside context cues vastly simplifies the problem and makes it far more feasible on-device.
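
Not my actual pipeline, but a minimal sketch of why the closed vocabulary helps: instead of open-ended decoding, you only score a handful of viseme sequences against the model's per-frame output and reject anything that fits poorly. The viseme inventory, command-to-viseme mapping, and rejection threshold here are made-up placeholders.

  import numpy as np

  # Hypothetical viseme labels and command-to-viseme mapping -- a real
  # system would take these from its lip-reading model's label set.
  VISEMES = ["sil", "m", "uu", "v", "dh", "ii", "s", "k", "l", "r", "a", "t"]
  VIDX = {v: i for i, v in enumerate(VISEMES)}
  COMMANDS = {
      "move this":  ["m", "uu", "v", "dh", "ii", "s"],
      "clear that": ["k", "l", "ii", "r", "dh", "a", "t"],
  }

  def align_cost(frame_logprobs, target_ids):
      """Best monotonic alignment of a viseme sequence against per-frame
      log-probabilities (simple DTW); lower is better."""
      T, _ = frame_logprobs.shape
      N = len(target_ids)
      D = np.full((T + 1, N + 1), np.inf)
      D[0, 0] = 0.0
      for t in range(1, T + 1):
          for n in range(1, N + 1):
              cost = -frame_logprobs[t - 1, target_ids[n - 1]]
              # Either stay on the current viseme or advance to the next one.
              D[t, n] = cost + min(D[t - 1, n], D[t - 1, n - 1])
      return D[T, N] / T  # length-normalize so commands stay comparable

  def recognize(frame_logprobs, reject_above=2.5):  # threshold is a guess
      """Score every command in the closed vocabulary; reject poor fits."""
      scores = {c: align_cost(frame_logprobs, [VIDX[v] for v in vs])
                for c, vs in COMMANDS.items()}
      best = min(scores, key=scores.get)
      return best if scores[best] < reject_above else None

  if __name__ == "__main__":
      # Stand-in for real VSR output: per-frame log-softmax over visemes.
      rng = np.random.default_rng(0)
      logits = rng.normal(size=(40, len(VISEMES)))
      logp = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
      print(recognize(logp))

With only a few commands, even a crude scorer like this can separate them, which is a big part of why restricting the vocabulary makes on-device recognition tractable.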

I think it's brilliant UX.

makeitdouble 11 minutes ago | parent

No UX needs to be perfect for everyone, but this doesn't sound trivial to make reliable.

First things that came to mind:

  - facial hair
  - getting people to learn to make bigger mouth movements and not mumble
  - we're constantly self-correcting our speech as we hear our voice. This removes the feedback loop.
  - non-English languages (God forbid bilingualism)
  - camera angles and head movement
And that's just from thinking about it for 30 seconds. I'm sure there are some really good use cases, but will any research group/company push through for years and years to make it really good even if the response is lukewarm?