pedalpete | 3 days ago
I'd love to get a better understanding of the technology this is built with (without sitting through an exceedingly long video). I suspect it's EMG through muscles in the ear and jaw bone, but that seems too rudimentary. The TED talk describes a system which includes sensors on the chin across the jaw bone, but the demo has obviously removed that sensor.
fxwin | 3 days ago
I think this is what you're looking for: https://www.media.mit.edu/projects/alterego/publications/
ilaksh | 3 days ago
Maybe they have combined an LLM or something with the speech-detection convolution layers or whatever they were doing. Like with JSON schemas constraining the set of available tokens for structured outputs, except here the set of tokens comes from the top 3-5 words that their first analysis/network decided are the most likely. With that smarter system they could get by with fewer electrodes in a smaller area at the base of the skull, where cranial nerves for the face and tongue emerge from the brainstem.
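To make the speculation above concrete: this is the same trick as constrained decoding, where you mask the language model's logits down to an allowed set before sampling. A toy sketch, with made-up numbers — the vocab, logits, and candidate words are all hypothetical, not anything from the actual AlterEgo system:

```python
import math

def constrained_pick(logits, vocab, allowed):
    """Keep only the logits for words the EMG stage flagged as likely
    candidates, then softmax over that subset and pick the best one."""
    masked = {w: logits[vocab[w]] for w in allowed}
    z = max(masked.values())                       # subtract max for stability
    exp = {w: math.exp(v - z) for w, v in masked.items()}
    total = sum(exp.values())
    probs = {w: e / total for w, e in exp.items()}
    return max(probs, key=probs.get), probs

# Hypothetical example: LM logits over a tiny 5-word vocab, plus the
# top-3 candidate words the (imagined) EMG classifier produced.
vocab = {"yes": 0, "no": 1, "maybe": 2, "stop": 3, "go": 4}
logits = [2.0, 0.5, 1.0, 3.5, 0.2]  # unconstrained, the LM would say "stop"
word, probs = constrained_pick(logits, vocab, {"yes", "no", "maybe"})
print(word)  # "yes" — the highest-scoring word within the allowed set
```

The payoff is that the EMG classifier only has to narrow things to a handful of candidates; the language model's context does the rest, which is why a noisier signal from fewer electrodes could still work.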
jackthetab | 3 days ago
Thirteen minutes is an "exceedingly long video"?! Man, I thought I was jaded complaining about 20-minute videos! :-) What I want to know is: what are they connected to? A laptop? An AS/400? An old Cray they have lying around? I'd think doing the demo while walking would have been de rigueur. Anyway, très cool!