▲ | refulgentis 4 days ago |
[flagged]
▲ | HenryNdubuaku 4 days ago | parent | next [-]
Thanks for the comment, but: 1) The commit history goes back to April. 2) The llama.cpp license is included in the repo where necessary, as Ollama does, until the dependency is deprecated. 3) Flutter isolates behave like servers, and the Cactus code uses that.
| ||||||||||||||||||||||||||||||||||||||||||||||||||
▲ | rshemet 4 days ago | parent | prev [-]
Reminds me of:

"You are, undoubtedly, the worst pirate I have ever heard of."
"Ah, but you have heard of me."

Yes, we are indeed a young project. Not two weeks, but a couple of months. Welcome to AI; most projects are young :)

Yes, we are wrapping llama.cpp. For now. Ollama also began by wrapping llama.cpp. That is the mission of open-source software: to enable the community to build on each other's progress.

We're enabling the first cross-platform in-app inference experience for GGUF models, and we're soon shipping our own inference kernels, fully optimized for mobile, to speed up performance. Stay tuned.

PS - we're up to good (source: trust us)