Cthulhu_ | 4 days ago
I've seen R&D demos of universal subtitling and translation in video conferencing, but it doesn't seem to have taken off, or it's hidden behind paywalls. I did suggest that people use good microphones when giving presentations over MS Teams, for the purposes of transcription, archiving, searchability, and AI summarization; real-time translation would be the other use case. That said, I don't believe it would work as smoothly in AR, as listening and reading are two different things for the brain. Plus, if it's aimed at older people, they likely have sight issues too. To a point this is already possible: just ask people to speak into your phone with e.g. Google Translate or some other speech-to-text engine. But that's awkward, because of the context switch to a separate device and the processing delay.
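For the phone workflow, something like this is roughly all it takes today (a minimal sketch in Python, assuming the SpeechRecognition package and a working microphone; the translation step at the end is a hypothetical placeholder for whatever service you have access to):

    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # Capture a single utterance from the default microphone
    # (sr.Microphone requires the PyAudio package).
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    # Transcribe via the free Google Web Speech endpoint.
    text = recognizer.recognize_google(audio, language="de-DE")
    print("Heard:", text)

    # Translation would be a second round trip to whatever API you
    # prefer (hypothetical helper, not part of SpeechRecognition):
    # print(translate_to_english(text))

It works, but those round trips are exactly the processing delay and context switch I mean.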
barnabyjones | a day ago
I know my folks already watch movies with subtitles for this reason, and I would think sight issues could be calibrated for if the product is a pair of glasses? But I don't know how AR tech works with e.g. farsighted people who use reading glasses.