kgeist 4 hours ago
Interesting how times have changed. Back in 2015, the entire Go runtime (already a mature codebase) was rewritten from C to Go semi-automatically: one of the maintainers wrote a C-to-Go conversion tool (for the subset of C they used) so that the converted code compiled and produced identical output, and the result was then manually refactored to make the Go code more idiomatic and optimized. And now you can just ask a language model.

The slides: https://go.dev/talks/2015/gogo.slide#3

An interesting similarity:

> We had our own C compiler just to compile the runtime.

The Bun team maintains their own fork of Zig too.
kelnos an hour ago
The big difference here is that the C-to-Go tool was presumably deterministic: running it over and over again produces the exact same result. You can trust that result because a human wrote the conversion tool, understood it, tested it, and worked the bugs out.

The LLM is non-deterministic. You could have it do the conversion 10 times independently and get 10 different results, some of them wildly different. There's no way to validate that without reviewing each result in its entirety, every time.

That's not to say a human-written deterministic conversion tool is perfect or infallible. But you can certainly build much more confidence in it than you can in an LLM.
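To make the determinism point concrete: a rule-based converter is a pure function of its input, so the same source always yields byte-identical output that can be diffed, regression-tested, and debugged rule by rule. Here's a toy sketch in Go (the rules and type map are hypothetical illustrations, nothing like the actual tool the Go team used):

```go
package main

import (
	"fmt"
	"regexp"
)

// declRe matches a tiny, hypothetical subset of C variable declarations,
// e.g. "int x = 42;". A real converter would parse a full AST.
var declRe = regexp.MustCompile(`^(int|float) (\w+) = (.+);$`)

// typeMap is an illustrative C-to-Go type mapping.
var typeMap = map[string]string{"int": "int", "float": "float32"}

// convert rewrites one C declaration into Go. Because it is pure rule
// application with no randomness, the same input always produces the
// same output — rerunning the tool can never silently change the result.
func convert(cLine string) string {
	if m := declRe.FindStringSubmatch(cLine); m != nil {
		return fmt.Sprintf("var %s %s = %s", m[2], typeMap[m[1]], m[3])
	}
	// Unhandled constructs are flagged rather than guessed at.
	return "// TODO: unhandled: " + cLine
}

func main() {
	fmt.Println(convert("int x = 42;"))    // var x int = 42
	fmt.Println(convert("float y = 1.5;")) // var y float32 = 1.5
}
```

An LLM sampling tokens gives you no such guarantee: each run is a fresh draw, so every output needs its own review.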