MrDrMcCoy 5 days ago
Question: can this use multiple forms of compute at once? Many phones have both GPUs capable of compute and NPUs, and that number will only increase. I'm sure it would be challenging, but that's a lot of performance to leave on the table if it can't do so already. I'm very curious what your impressive optimization could do on an RK3588, since it has pretty decent hardware in all three categories, and I'm now seriously considering a Radxa Orion to play with this on :)

One more if you have a moment: will this be limited to text generation, or will it have audio and image capabilities as well? It would be neat to enable not only image generation, but also voice recognition, translation, computer vision, and image editing and enhancement features in mobile apps beyond what the big players deign to give us :)
rshemet 4 days ago
Yes! Cactus is optimized for mobile CPU inference, and we're finishing internal testing of hybrid kernels that use the NPU as well as other chips. We don't advise using GPUs on smartphones, since they're very energy-inefficient; mobile GPU inference is actually the main driver behind the stereotype that "mobile inference drains your battery and heats up your phone."

Regarding your last question: the short answer is yes, we'll have multimodal support. We currently support voice transcription and image understanding, and we'll be expanding these capabilities with more models, voice synthesis, and much more.
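The backend-preference logic described above (use the NPU when present, fall back to the CPU, and skip the mobile GPU for energy reasons) could be sketched roughly like this. This is a hypothetical illustration, not the Cactus API; the `Device` type, `pick_backend` function, and the relative energy-cost numbers are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str            # e.g. "cpu", "npu", "gpu"
    available: bool
    energy_cost: float   # illustrative relative energy per token, not measured data

def pick_backend(devices: list[Device]) -> str:
    """Prefer the NPU, then the CPU; the mobile GPU is deliberately excluded."""
    preference = ["npu", "cpu"]  # GPU never considered: poor joules-per-token on phones
    usable = {d.name for d in devices if d.available}
    for name in preference:
        if name in usable:
            return name
    raise RuntimeError("no supported backend available")

if __name__ == "__main__":
    phone = [
        Device("cpu", available=True, energy_cost=1.0),
        Device("gpu", available=True, energy_cost=3.0),  # fast, but drains the battery
        Device("npu", available=True, energy_cost=0.5),
    ]
    print(pick_backend(phone))  # -> "npu"
```

A hybrid-kernel scheme like the one the reply hints at would go further, splitting individual ops across backends rather than picking a single one, but the same preference ordering would apply.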