pzo 4 days ago

Is this using only llama.cpp as the inference engine? How is the support these days for NPU and GPU? Not sure if LLMs can run on the NPU, but many models like STT, TTS, and vision can often run much faster on the Apple NPU.
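(For context: on Apple platforms, the usual way to target the Neural Engine is through Core ML's `MLModelConfiguration.computeUnits` setting; whether the ANE is actually used is decided by the framework per-layer. A minimal Swift sketch, with a hypothetical compiled-model path:)

```swift
import CoreML

// Hypothetical path to a compiled Core ML model bundle
let modelURL = URL(fileURLWithPath: "SpeechModel.mlmodelc")

let config = MLModelConfiguration()
// Request CPU + Neural Engine; Core ML falls back to CPU
// for any layers the ANE cannot execute
config.computeUnits = .cpuAndNeuralEngine

let model = try MLModel(contentsOf: modelURL, configuration: config)
```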

liuliu 3 days ago | parent [-]

You don't need to guess: https://github.com/cactus-compute/cactus/tree/main/cpp