teaearlgraycold 4 days ago
Does this download models at runtime? I would have expected a different API for that. I understand that you don't want to bundle a multi-gigabyte model in your app, but the usual mobile flow is to block functionality behind a progress bar on first run, and downloading inline doesn't integrate well with that. You'd want an API for downloading or pulling from a cache; return an identifier from that and plug it into the inference API.
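For concreteness, here's a minimal sketch of that flow, written against Node-style APIs. Everything in it (ensureModel, the cache directory, the URL) is illustrative, not this library's actual API: a download-or-cache step reports progress, resolves to a local path, and that path is what gets handed to inference.

```typescript
// Hypothetical download-or-cache step: all names here are illustrative,
// not the library's API. Resolves to a local file path (the "identifier")
// that the inference init can consume.
import { createHash } from "crypto";
import * as fs from "fs/promises";
import * as path from "path";

const CACHE_DIR = "/tmp/model-cache"; // in a real app, the platform cache dir

async function ensureModel(
  url: string,
  onProgress: (fraction: number) => void,
): Promise<string> {
  // Key the cache on the URL so a second run is a no-op.
  const file = path.join(
    CACHE_DIR,
    createHash("sha256").update(url).digest("hex") + ".bin",
  );
  try {
    await fs.access(file); // cache hit: skip the download entirely
    return file;
  } catch {
    /* cache miss: fall through to download */
  }

  await fs.mkdir(CACHE_DIR, { recursive: true });
  const res = await fetch(url);
  if (!res.ok || !res.body) throw new Error(`download failed: ${res.status}`);

  const total = Number(res.headers.get("content-length") ?? 0);
  const chunks: Buffer[] = [];
  let received = 0;
  for await (const chunk of res.body) {
    const buf = Buffer.from(chunk);
    chunks.push(buf);
    received += buf.length;
    if (total > 0) onProgress(received / total); // drives the progress bar
  }
  await fs.writeFile(file, Buffer.concat(chunks));
  return file;
}

// First run: block functionality behind a progress bar until the model
// is local, then plug the resulting path into the inference API.
const modelPath = await ensureModel(
  "https://example.com/model.gguf", // illustrative URL
  (f) => console.log(`downloading model: ${(f * 100).toFixed(0)}%`),
);
```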
rshemet 4 days ago
Very good point; we've heard this before. We're restructuring the model initialization API to point to a local file and exposing a separate, abstracted download function that takes a URL. Re: downloading post-install: based on the feedback we've received, this is indeed the preferred pattern (as opposed to bundling large files into the app). We'll update the download API. Thanks again.
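A small sketch of the restructured shape described here, with the same caveat that the names (downloadModel, Model.init) are assumptions rather than the shipped API: the download step owns URLs and caching, and initialization only ever sees a local file.

```typescript
// Illustrative shape only; downloadModel and Model.init are assumed names.
interface DownloadOptions {
  onProgress?: (fraction: number) => void; // feeds a first-run progress bar
}

// Abstracted download: URL in, local path out. Caching, resume, and
// progress reporting all live here rather than in model init.
async function downloadModel(
  url: string,
  opts: DownloadOptions = {},
): Promise<string> {
  // ...fetch into the app cache dir, reporting progress (see sketch above)
  opts.onProgress?.(1); // stub: report completion immediately
  return "/data/app-cache/model.gguf"; // resolved local path (stubbed)
}

class Model {
  private constructor(readonly path: string) {}
  // Initialization is now URL-free: it points at a file already on disk.
  static async init(localPath: string): Promise<Model> {
    return new Model(localPath);
  }
}

// Usage: download (or hit the cache), then hand the path to init.
const local = await downloadModel("https://example.com/model.gguf", {
  onProgress: (f) => console.log(`model download: ${Math.round(f * 100)}%`),
});
const model = await Model.init(local);
```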