danaris 3 days ago
From what I can tell, only Apple even wants to try doing any of the processing on-device, including parsing the speech. (This may be out of date at this point, but I haven't heard of Amazon or Google doing on-device processing for Alexa or Assistant.) So there's no way for them to do anything without sending it off to the datacenter.
miyoji 3 days ago
> (This may be out of date at this point, but I haven't heard of Amazon or Google doing on-device processing for Alexa or Assistant.)

It was out of date 6 years ago.

"This breakthrough enabled us to create a next generation Assistant that processes speech on-device at nearly zero latency, with transcription that happens in real-time, even when you have no network connection." - Google, 2019

https://blog.google/products/assistant/next-generation-googl...
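For a concrete sense of what fully offline, on-device transcription looks like, here is a minimal sketch using the open-source Vosk library. This is illustrative only, not the stack Google or Amazon actually run on their devices; the model directory and audio file names are placeholders.

    # Offline speech-to-text sketch with Vosk (runs entirely locally).
    # Assumes a Vosk model has been downloaded into the "model" directory
    # and "utterance.wav" is a 16-bit mono PCM recording; both are placeholders.
    import json
    import wave

    from vosk import Model, KaldiRecognizer

    model = Model("model")
    wf = wave.open("utterance.wav", "rb")
    rec = KaldiRecognizer(model, wf.getframerate())

    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        if rec.AcceptWaveform(data):  # a complete phrase was recognized
            print(json.loads(rec.Result())["text"])

    # Flush whatever partial result remains at end of audio.
    print(json.loads(rec.FinalResult())["text"])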
delecti 3 days ago
Alexa actually had the option to process all requests locally (on at least some hardware) for roughly its first 10 years, from launch until earlier this year. The stated reason for removing the feature was generative AI.
foobiekr 2 days ago
It's an obvious cost optimization. Make the consumer directly cover the cost of inference and idle inference hardware.