agambrahma a day ago |
Yes -- for purely local inference, not much; native is the obvious choice. The value would be in actor processes, where you can delegate inference without paying the "copy tax" for crossing the sandbox boundary. So it's less "inference engine" and more "tmux for AI agents": think pausing, moving, resuming, and swapping the model backend. I scoped the post to the memory architecture, since that was the least obvious part; I'll follow up with one about the actor-model aspect.
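To make the "tmux for AI agents" idea concrete, here's a minimal sketch of what such an actor might look like: a mailbox-driven worker that the host can pause, resume, or hand a different model backend without restarting. All names (`AgentActor`, `swap_backend`, etc.) are illustrative assumptions, not APIs from the post.

```python
import queue
import threading

class AgentActor:
    """Hypothetical sketch: an AI agent as an actor with a mailbox.

    The host sends prompts as messages and can pause/resume the
    actor or swap its model backend mid-flight -- the kind of
    lifecycle control tmux gives you over terminal sessions.
    """

    def __init__(self, backend):
        self.backend = backend            # callable: prompt -> reply
        self.inbox = queue.Queue()        # messages in
        self.outbox = queue.Queue()       # replies out
        self._running = threading.Event() # cleared while paused
        self._running.set()
        self._stop = False
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while not self._stop:
            self._running.wait()          # block here while paused
            try:
                prompt = self.inbox.get(timeout=0.1)
            except queue.Empty:
                continue
            self.outbox.put(self.backend(prompt))

    def ask(self, prompt, timeout=5):
        self.inbox.put(prompt)
        return self.outbox.get(timeout=timeout)

    def pause(self):
        self._running.clear()

    def resume(self):
        self._running.set()

    def swap_backend(self, backend):
        # e.g. swap a local model for a remote API between messages
        self.backend = backend

    def stop(self):
        self._stop = True
        self._running.set()               # unblock if paused


agent = AgentActor(lambda p: f"local:{p}")
print(agent.ask("hi"))        # served by the first backend
agent.swap_backend(lambda p: f"remote:{p}")
print(agent.ask("hi"))        # same actor, new backend
agent.stop()
```

Pausing, moving, and resuming across machines would of course also require serializing the actor's state, which is where the memory-architecture part of the post comes in.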
saagarjha a day ago | parent |
I'm a little confused about what an actor process is. To me, a process is inherently local?