fwsgonzo 9 hours ago

How much work would it be to use the C++ ONNX run-time with this instead of Python? Is it a Claudeable amount of work?

The iOS version is Swift-based.

rohan_joshi 9 hours ago | parent [-]

shouldn't be hard. what backend/hardware are you interested in running this on? i'll add an example of using a C++ ONNX model. btw, check out the roadmap — our inference engine will be out in 1-2 weeks, and it's expected to be faster than ONNX.

fwsgonzo 6 hours ago | parent [-]

Desktop CPUs running inference on a single background thread would be the ideal case for what I'm considering.
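
For that use case, a minimal sketch with the ONNX Runtime C++ API might look like the following. It pins the session to a single thread via `SetIntraOpNumThreads(1)` / `SetInterOpNumThreads(1)`; the model path (`model.onnx`), the input shape, and the tensor names (`input`/`output`) are placeholders you'd replace with your model's actual values. Link against onnxruntime and wrap the `Run` call in a `std::thread` to keep it off the main thread.

```cpp
// Sketch: single-threaded CPU inference with the ONNX Runtime C++ API.
// Assumes onnxruntime is installed; model path and tensor names are hypothetical.
#include <onnxruntime_cxx_api.h>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "single-thread-demo");

  Ort::SessionOptions opts;
  opts.SetIntraOpNumThreads(1);        // one thread inside each operator
  opts.SetInterOpNumThreads(1);        // no parallel operator scheduling
  opts.SetExecutionMode(ORT_SEQUENTIAL);

  Ort::Session session(env, "model.onnx", opts);  // placeholder path

  // Dummy input; the shape must match your model's input.
  std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
  std::vector<int64_t> shape{1, 3, 224, 224};
  Ort::MemoryInfo mem =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value tensor = Ort::Value::CreateTensor<float>(
      mem, input.data(), input.size(), shape.data(), shape.size());

  const char* in_names[]  = {"input"};   // assumed tensor names
  const char* out_names[] = {"output"};
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             in_names, &tensor, 1, out_names, 1);

  std::cout << "inference ran on a single CPU thread\n";
}
```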