bayindirh 13 hours ago
> why isn’t my iPhone actually doing any of this yet?

Probably Apple is trying to distill the models so they can run on your phone locally. Remember, most, if not all, of Siri is running on your device. There's no round trip whatsoever for voice processing. Also, for larger models, there will be throwaway VMs per request, so building that infra takes time.
lxgr 7 hours ago | parent | next
They just launched "Private Cloud Compute" with much fanfare to enable server-side LLM processing, so between that and the fact that Siri has been server-based for most of its existence (local processing is fairly new), I don't think that's their main constraint at this point. That said, "Private Cloud Compute" does run on proprietary Apple hardware, so availability might be a concern (assuming they don't want to start charging for it).
jonplackett 12 hours ago | parent | prev
It says there are two models, one of which runs locally. I think the local one has already been released to app developers (it was in the WWDC keynote).
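For anyone curious, here's roughly what calling that on-device model looks like with the FoundationModels framework Apple showed at WWDC. This is a minimal sketch based on the session material; names like LanguageModelSession and respond(to:) are from the announced API and may differ in the shipping SDK:

```swift
import FoundationModels

// Minimal sketch: send one prompt to Apple's on-device model.
// Assumes the FoundationModels API as presented at WWDC.
func runLocalPrompt() async throws {
    let model = SystemLanguageModel.default

    // The model is unavailable on older devices or when Apple
    // Intelligence is turned off, so check before starting a session.
    guard case .available = model.availability else {
        print("On-device model unavailable")
        return
    }

    // Sessions run entirely on-device; no request leaves the phone.
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Name three hiking trails near Cupertino.")
    print(response.content)
}
```

The availability check matters because the framework only exposes the model on Apple Intelligence-capable hardware, which is presumably part of why the rollout has been gradual.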