ants_everywhere 4 days ago

It would be cool to have a Hurd project with a verified microkernel like seL4.

AI is getting good enough to help with the verification process, and a hardened kernel would provide better protection than the current strategy of using containers everywhere.

snvzz 4 days ago | parent | next

>It would be cool to have a Hurd project with a verified microkernel like seL4.

There's Genode[0]. Relative to the Hurd, its design is much more advanced, and it supports a range of modern microkernels including seL4.

0. https://genode.org/

cmrdporcupine 4 days ago | parent | next

Genode is more an "OS construction kit" than an OS, isn't it?

aidenn0 4 days ago | parent | next

Yes; but the project includes "Sculpt", which is more of an OS.

https://genode.org/download/sculpt

snvzz 4 days ago | parent | prev

It has a distribution, Sculpt[0].

There are many stories about using Genode on the Genodians[1] blog.

0. https://genode.org/download/sculpt

1. https://genodians.org/

ants_everywhere 4 days ago | parent | prev

Thanks, I hadn't heard of Genode; this looks really cool.

butterisgood 4 days ago | parent | prev

I don't know why this got downvoted... Hurd was indeed investigating L4 as an alternative microkernel for some time.

https://www.gnu.org/software/hurd/history/port_to_another_mi...

Neal Walfield was working on a new microkernel as well: https://www.gnu.org/software/hurd/microkernel/viengoos.html

ants_everywhere 4 days ago | parent

I'm aware of that post! I did a video looking at the GNU Hurd and I believe it came up there.

It definitely would not be a trivial amount of work.

Honestly, I think the downvotes were for mentioning that AI may have a role in verification. LLMs are increasingly being explored in the theorem-prover space, but it's still controversial to speak of them approvingly in some HN threads.

butterisgood 4 days ago | parent

I've worked a fair amount with LLMs from a code-generation perspective, and to be honest, I often find them better at reading and explaining code to a human than at generating good code.

It's an interesting idea that LLMs could be used not only to explain code but also to test the potentially tricky corner cases.
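A toy sketch of what that workflow might look like (the function and cases here are my own illustration, not from the thread): ask an LLM to enumerate corner cases for a routine, then pin the answers down as plain assertions so they keep being checked.

```python
# Hypothetical illustration: corner cases an LLM might flag for a
# simple integer-midpoint routine, captured as assertions.

def midpoint(lo: int, hi: int) -> int:
    """Midpoint of two ints, written as lo + (hi - lo) // 2 rather than
    (lo + hi) // 2 -- the classic overflow hazard in fixed-width-integer
    languages that an LLM reviewer tends to point out (Python's ints
    don't overflow, but the pattern is the interesting part)."""
    return lo + (hi - lo) // 2

# Corner cases: equal bounds, adjacent bounds, negative range, large values.
assert midpoint(0, 0) == 0
assert midpoint(2, 3) == 2
assert midpoint(-10, 10) == 0
assert midpoint(0, 2**62) == 2**61
```

The value isn't the code itself but the enumeration: "equal bounds", "negative range", and "large values" are exactly the cases humans skip when the happy path already works.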

I'm pretty sure LLMs are here to stay, and we're just going to have to learn the best ways to work with them.