| ▲ | rcarmo 9 hours ago |
| As a long-time fan, I’m kind of sad this happened and thought I’d start a thread here. I love niche operating systems and want them to thrive, and, having more SBCs than I can count, I would love to run my own OS builds on them. But not taking advantage of AI to speed up and automate things strikes me as… weird. Is it just me, or are even mainstream tech folk refusing to use AI to take away toil so devs can focus on actual creative work? |
|
| ▲ | defrost 9 hours ago | parent [-] |
| They seem to have a reasonable stance: The official wording is very precise.
If you want to get LLM assisted code upstream in Haiku, you have to do the work to show that your LLM didn’t accidentally generate code that is too similar to something from its training database without attribution, or code that is under a license incompatible with the MIT used in Haiku.
That is, of course, in addition to making sure you fully understand the code you are submitting. I would say this is the same as when you write the code yourself, but it is significantly harder to achieve that when the code is generated and you didn’t carefully think about each line of code when writing it.
Projects of long standing all have their own club rules, often you can play by house rules or fork. |
| ▲ | rcarmo 9 hours ago | parent [-] | | Yeah, well, they didn’t even check that this is still just build automation on a beefy native ARM64 host. Zero Haiku code was touched except for toolchain fixes. But now I’m set on forking it. I have rcarmo/9front almost booting on the target hardware, and once that works (it’s much faster to flash 100MB images and iterate that way), I’ll port the setup back to “my” Haiku. |
|