ImPrajyoth 3 days ago

That is the endgame.

I think we are moving toward a bilayered compute model:

The Cloud: for massive reasoning.

The Local Edge: a small, resilient model that lives on-device and handles the OS loop, privacy, and immediate context.
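To make the split concrete, here's a minimal Python sketch of that routing layer. Everything in it is a placeholder (the model functions, the escalation heuristic); it's not what BrainKernel actually does, just the shape of the idea:

    from dataclasses import dataclass

    # Hypothetical stand-in for a small on-device model
    # (e.g. something served by llama.cpp). Not a real API.
    def local_model(prompt: str) -> str:
        return f"[local answer to: {prompt}]"

    # Hypothetical stand-in for a hosted frontier model.
    def cloud_model(prompt: str) -> str:
        return f"[cloud answer to: {prompt}]"

    @dataclass
    class Request:
        prompt: str
        private: bool = False          # must never leave the device
        needs_reasoning: bool = False  # escalate to the cloud layer

    def route(req: Request) -> str:
        """Bilayered dispatch: Local Edge first, Cloud for heavy reasoning."""
        if req.private or not req.needs_reasoning:
            return local_model(req.prompt)
        return cloud_model(req.prompt)

    print(route(Request("summarize my clipboard", private=True)))
    print(route(Request("prove this theorem", needs_reasoning=True)))

The interesting open question is the escalation heuristic; ideally the local model itself decides when it's out of its depth.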

BrainKernel is my attempt to prototype that Local Edge layer. It's messy right now, but I think the OS of 2030 will definitely have a local LLM baked into the kernel.

hebejebelus 3 days ago

Well, on my MacBook, some of that already exists. In the Shortcuts app you can use the "Use Model" action, which offers to run an LLM on Apple's cloud, on-device, or via an external service (e.g. ChatGPT). I already use this for several actions, like reading emails from my tennis club and automatically putting the events in my calendar.
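For anyone curious what that email-to-calendar action boils down to, here's a rough Python sketch. `run_local_model` is a hypothetical stand-in for whatever model the Shortcut invokes, not Apple's actual API:

    import json

    # Hypothetical stand-in for the on-device model call the Shortcut
    # performs; returns a canned response so the sketch is runnable.
    def run_local_model(prompt: str) -> str:
        return ('{"title": "Club doubles night", '
                '"start": "2025-06-12T19:00", "duration_minutes": 90}')

    def email_to_event(email_body: str) -> dict:
        """Ask the model for a strict JSON event, then validate it."""
        prompt = (
            "Extract the event from this email as JSON with keys "
            "title, start (ISO 8601), duration_minutes. Email:\n" + email_body
        )
        raw = run_local_model(prompt)
        event = json.loads(raw)  # fails loudly if the model drifts off-schema
        for key in ("title", "start", "duration_minutes"):
            if key not in event:
                raise ValueError(f"model output missing {key!r}")
        return event

    print(email_to_event("Doubles night this Thursday at 7pm, 90 minutes."))

Pinning the model to a strict schema and validating the result is what makes a loop like this tolerable inside an otherwise deterministic pipeline.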

Whether we'll see it lower down in the system, I'm not sure. Honestly, I'm not certain of the utility of an autonomous LLM loop in many or most parts of an OS, where (in general) systems have more value the more deterministic they are; but in user space, who can say.

In any case, I certainly went down a fun rabbit hole thinking about a mesh network of LLM nodes and thin clients in a post-collapse world. In that scenario, I wonder if the utility of LLMs is really worth the complexity versus a Kindle-like device with a copy of Wikipedia...