ivape 5 days ago

Are you saying this would be more performant than Apple's on-device LLM inference?

HenryNdubuaku 4 days ago | parent | next [-]

Valid question. Our perspective is that there's room for multiple players: there are 7 billion devices to power, and everyone will get a slice.

elpakal 4 days ago | parent | prev [-]

Came here to ask how they view Apple Foundation Models as a threat.

> guarantees privacy by default, works offline, and doesn't rack up a massive API bill at scale.

I’ve been really interested in on-device ML for most of my career, and now I wonder how valuable these benefits really are. LLM vendor APIs are pretty performant these days, security is security, and with an on-device model you have to ship updates every time a new model comes out.

HenryNdubuaku 4 days ago | parent [-]

You don’t have to bundle the weights as an asset; you can do over-the-air updates, where new weights are simply downloaded.
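The over-the-air flow could be sketched roughly like this: check a small version manifest at launch, and download fresh weights only when the server has a newer version than what's on disk. This is a hypothetical illustration, not Cactus's actual API; the manifest fields, file names, and URLs are all assumptions.

```python
import json
import urllib.request
from pathlib import Path

def needs_update(local_version: int, remote_version: int) -> bool:
    """True when the server advertises newer weights than we have on disk."""
    return remote_version > local_version

def sync_weights(manifest_url: str, weights_dir: Path) -> bool:
    """Fetch the remote manifest and download weights if they are newer.

    Returns True if a download happened. (Illustrative sketch only:
    the manifest schema and file names here are made up.)
    """
    weights_dir.mkdir(parents=True, exist_ok=True)
    local_manifest = weights_dir / "manifest.json"
    local_version = 0
    if local_manifest.exists():
        local_version = json.loads(local_manifest.read_text()).get("version", 0)

    with urllib.request.urlopen(manifest_url) as resp:
        remote = json.load(resp)

    if not needs_update(local_version, remote["version"]):
        return False  # weights on disk are already current

    # Stream the new weights to disk, then record the version we now hold,
    # so subsequent launches skip the download.
    with urllib.request.urlopen(remote["weights_url"]) as resp:
        (weights_dir / "model.bin").write_bytes(resp.read())
    local_manifest.write_text(json.dumps({"version": remote["version"]}))
    return True
```

The key point is that the app binary and the weights have independent release cycles: shipping a new model becomes a server-side change rather than an app-store update.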

elpakal 3 days ago | parent [-]

Neat, but that doesn't really address my point. My point is that you still need to roll out changes, while LLM APIs just work.