pixl97 | 8 hours ago
> AI from creating an operating system?

Nothing really... Creating a working operating system, and understanding all the hardware bugs it could run into, is a different story. Simply put, the combined energy expenditure to create something like Windows or Linux would likely stagger a person: hundreds of gigawatt-hours, hell, probably terawatt-hours. That energy expenditure is amortized because we share the code. It's the same reason we don't have that many top-end AI models: the amount of energy you need to spend on one is massive. Intelligence doesn't mean you should do everything yourself. Sharing and stealing are strategies used in the animal kingdom as alternate solutions to the limited-fuel problem.
sodafountan | 8 hours ago
Hardware bugs can be documented for an LLM to learn from; it's really just a chicken-and-egg problem. There are plenty of open-source, working operating systems for LLMs to learn from as well. And yes, code reuse and distribution are valuable; that's a good point. Having an LLM generate everything on the fly is definitely energy-intensive, but that hasn't stopped the world from building massive data centers to support it regardless.

The theory behind my past few posts is something like rolling updates. Using the text editor as an example: you'd prompt the AI agent in the hypothetical OS to open a document, and it would generate a word processor on the fly, referencing the dozens of open-source word-processor repos and pushing its own contributions back out into the world for other LLMs to reference. Computationally expensive, yes. It would then learn from your behavior while you use the program, and the next time you prompted the OS for a word-processor-like feature (I'm imagining an MS-DOS-style prompt), it would iterate on that existing program, perhaps adding new features or key bindings as it sees fit. That pass is less computationally expensive, because ideally the bulk of the work has already been learned. (A rough sketch of this loop is at the end of this post.)

I understand that hard-disk space is cheap, and you'd probably want some space to store personal files, but the OS could theoretically load your program directly into RAM once it's compiled from AI-generated source code, removing the need to save the programs themselves to disk.

Since LLMs are globally distributed, they're learning from all human interactions and actively developing cutting-edge word processors tailored to each end user's needs. More of a Vim-style user? The LLM can pick up on that. Prefer something closer to MS Word? The LLM is learning that too. (The second sketch below shows what recording those preferences might look like.) AIOS slowly becomes geared directly to you, the end user. That really has nothing to do with intelligence; you're just teaching a computer how to compute, which is what AI is all about. Just some ideas on what the future might hold.
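To make that loop concrete, here's a minimal Python sketch. Everything in it is hypothetical: generate_source() stands in for whatever model API the OS would actually call, and I'm assuming the generated program exposes a main() entry point.

    # Hypothetical AIOS loop: generate a tool's source on demand, then
    # compile and run it entirely in memory -- nothing hits the disk.
    import types

    def generate_source(prompt: str, feedback: list[str]) -> str:
        """Placeholder for the model call: would return Python source
        for the requested tool, informed by accumulated usage feedback."""
        raise NotImplementedError("wire a real model up here")

    def run_tool(prompt: str, feedback: list[str]) -> None:
        source = generate_source(prompt, feedback)
        code = compile(source, "<generated>", "exec")  # in-memory compile
        tool = types.ModuleType("generated_tool")
        exec(code, tool.__dict__)                      # in-memory load
        tool.main()  # assumption: generated programs expose main()

    feedback: list[str] = []  # observations of the user's behavior
    run_tool("open a word processor on ~/notes.txt", feedback)

The point of the sketch is the absence of any write to disk: the "program" only ever exists as source text and an in-memory module.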
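And the personalization half, equally hypothetical. The profile path and field names are invented for illustration; the idea is that programs live in RAM while a small profile of learned preferences persists alongside your personal files and gets folded into the next generation request:

    # Hypothetical preference store feeding the next generation request.
    import json
    from pathlib import Path

    PROFILE = Path.home() / ".aios_profile.json"  # invented location

    def record_preference(key: str, value: str) -> None:
        prefs = json.loads(PROFILE.read_text()) if PROFILE.exists() else {}
        prefs[key] = value
        PROFILE.write_text(json.dumps(prefs, indent=2))

    def personalized_prompt(base: str) -> str:
        prefs = json.loads(PROFILE.read_text()) if PROFILE.exists() else {}
        hints = "; ".join(f"{k}: {v}" for k, v in prefs.items())
        return f"{base}\nKnown user preferences: {hints}" if hints else base

    record_preference("editing_style", "vim-like modal keybindings")
    print(personalized_prompt("generate a word processor"))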