barnabyjones 5 days ago

My parents have similar issues due to hearing loss; it makes any kind of social interaction a chore, which leads to a similar spiral. For years I've wanted to build, or hoped someone else would build, a set of AR glasses purely focused on providing accurate real-time subtitles, with no other gimmicks or features that might hurt wearability or usability. I think that's the biggest QOL boost most old folks could get from a single product, and it seems far more feasible than other potential QOL solutions like robotics, but I wouldn't know where to start with building it. As a bonus, it would only need an LLM/Google Translate hookup to become an amazing travel tool.
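
That hookup really is a thin layer on top of the speech-to-text part. A minimal sketch, assuming the captions already arrive as text and using a small local Helsinki-NLP model via Hugging Face transformers purely as a stand-in for whichever LLM or Google Translate backend a real product would call (the model name and the German-to-English pair are just for illustration):

    # Sketch of the translate hookup: take a recognized caption and render it
    # in the wearer's language. A small local Helsinki-NLP model stands in for
    # whatever LLM / Google Translate API a real product would call.
    from transformers import pipeline

    # German -> English chosen purely as an example pair
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

    def caption_to_subtitle(caption: str) -> str:
        """Translate one recognized caption line for display on the glasses."""
        return translator(caption)[0]["translation_text"]

    print(caption_to_subtitle("Wo ist der Bahnhof?"))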

copperx 4 days ago | parent | next [-]

Spending R&D on something like this is much more important than building fancier hearing aids. Universal subtitles would be a life changer.

stavros 4 days ago | parent | next [-]

Couldn't you do this with $500 in some Xreal Airs and a mobile phone running Parakeet right now?
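
For the phone side, something like this chunked loop is about all the software a prototype would need. It's only a sketch (not true streaming ASR), the Parakeet checkpoint name and chunk length are just ones I picked, and a real build would push the text to the glasses instead of printing it:

    # Chunked transcription loop: record a few seconds of mic audio,
    # run it through a Parakeet model via NVIDIA NeMo, show the caption.
    # Expect a couple of seconds of lag; tune CHUNK_SECONDS to taste.
    import sounddevice as sd
    import soundfile as sf
    import nemo.collections.asr as nemo_asr

    SAMPLE_RATE = 16_000    # Parakeet models expect 16 kHz mono audio
    CHUNK_SECONDS = 4       # latency vs. accuracy trade-off

    # Example public checkpoint name; swap in whichever Parakeet variant you prefer
    asr_model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v2")

    while True:
        # Record one chunk from the default microphone
        audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE, channels=1, dtype="float32")
        sd.wait()

        # NeMo's transcribe() takes file paths, so round-trip through a wav file
        sf.write("chunk.wav", audio, SAMPLE_RATE)
        result = asr_model.transcribe(["chunk.wav"])[0]

        # Depending on NeMo version this is a plain string or a Hypothesis object
        caption = result.text if hasattr(result, "text") else result

        # A real build would push this to the glasses' display instead
        print(caption)

Getting from that to smooth, low-latency captions is where the real work is, but none of it looks exotic.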

barnabyjones a day ago | parent [-]

Those look bulky, presumably to support exactly the kind of feature creep I think will always be a problem in this category. There are a LOT of people who simply won't consider wearing them until the form factor is close to normal glasses, but it would be hard to convince any product manager not to expand into videos/games/music/AI/etc.

stavros a day ago | parent [-]

What's the feature creep you see them supporting?

barnabyjones 12 hours ago | parent [-]

The Xreal front page is people playing racing games and watching movies; I'm imagining something that can be used nonchalantly in public, and I'm assuming every feature beyond a bare minimum speech-to-text display would increase the size.

Cthulhu_ 4 days ago | parent | prev [-]

I've seen R&D demos of universal subtitling and translation in video conferencing, but it doesn't seem to have taken off, or it's hidden behind paywalls. I have suggested that people use good microphones when giving presentations over MS Teams, for transcription, archiving, searchability, and AI summarization; real-time translation would be the other use case.

That said, I don't believe it would work as smoothly if used in AR, as speaking and reading are two different brain things. Plus, if it's aimed at older people, they likely have sight issues too.

To a point this is already possible: just ask people to speak into your phone with e.g. Google Translate or some other speech-to-text engine. But that's awkward, both because of the context switch to a device and because of the processing time required.

barnabyjones a day ago | parent [-]

I know my folks already watch movies with subtitles for this reason, and I would think sight issues can be calibrated for if the product is a pair of glasses? But idk how AR tech works with e.g. farsighted people who use reading glasses.

peepee1982 4 days ago | parent | prev [-]

I've never thought of this use case and I think it's fantastic.