| ▲ | PostmarketOS in 2026-02: generic kernels, bans use of generative AI (postmarketos.org) |
| 64 points by pantalaimon 3 hours ago | 68 comments |
| |
|
| ▲ | erelong 6 minutes ago | parent | next [-] |
This sounds impractical, and they will probably not keep the ban. AI use should be able to accelerate the development of ports for currently unsupported or under-supported devices, which would directly support the project. I guess I wouldn't worry about the policy; they will probably change it naturally if/when AI becomes more useful in practice.
|
| ▲ | jonathrg 2 hours ago | parent | prev | next [-] |
Very happy to see PostmarketOS take an uncompromising stance and also provide justification for it.
| |
| ▲ | fartfeatures 27 minutes ago | parent | next [-] | | Feels pretty Luddite to me. I remember when people were crying about how much power a Google search uses. This is the same thing all over again, and it is as pointless now as it was back then. https://arstechnica.com/ai/2025/08/google-says-it-dropped-th... > Google says it dropped the energy cost of AI queries by 33x in one year. The company claims that a text query now burns the equivalent of 9 seconds of TV. | | |
| ▲ | kruffalon 2 minutes ago | parent | next [-] | | The audacity of calling an organisation that works on making mobile phones and other small PCs run free software a Luddite is impressive. That's like calling a person going for seconds a conservative (in the USA political sense). | |
| ▲ | idiotsecant 17 minutes ago | parent | prev [-] | | No, it's entirely justified when quality of code matters. They don't want a thousand gallons of unreviewable slop. They want a reasonable amount of code that can be sensibly reviewed. | | |
| ▲ | fartfeatures 8 minutes ago | parent | next [-] | | There are ways to achieve that without a blanket ban; if you read their AI policy, it seems more "ethically" motivated. They certainly address this first, with many more words and 7 references. They do go on to address code quality, but it is more of an afterthought: 0 references, fewer words, and lower down the page. The timing is also suspicious, coming shortly after publication of this report: https://www.reuters.com/business/media-telecom/smartphone-ma... which forecasts declining smartphone sales, meaning fewer devices for this OS to run on. | |
| ▲ | UqWBcuFx6NV4r 8 minutes ago | parent | prev [-] | | Please tell me more about how you used GPT-3 a few years ago and haven’t stopped blabbing about how bad it is ever since. If you’re unable to look at a PR for a few minutes and glean that it’s not worth looking at, then that’s entirely a skill issue on your part. Don’t blame everyone else for what’s very clearly your own shortcoming. If you’re finding a PR to be unreviewable, then reject it because it’s unreviewable, not by reverse-engineering some BS rule which results in you trying to control how people write their code. I am completely confident that I could put together some LLM-assisted code that you couldn’t distinguish from something hand-written, with still enough LLM assistance to have been meaningfully beneficial. There are many valid critiques of LLMs, but the whole “it’s banned because of code quality” approach is BS. This decision was clearly rooted in the silly AI culture war. This is all completely ignoring that the rule is unenforceable to begin with. It’s like the chuds on twitter talking about how they can “always tell” that someone is trans. It’s logically flawed. |
|
| |
| ▲ | GaryBluto 30 minutes ago | parent | prev | next [-] | | You say "uncompromising stance" with "justification", I say stubborn prejudice. They simply restate the same weak, nonsensical complaints that apply to many other technologies they undoubtedly have no issue with and happily use. | |
| ▲ | LaSombra 44 minutes ago | parent | prev [-] | | I wish more projects would take the same stance. | | |
| ▲ | UqWBcuFx6NV4r 7 minutes ago | parent [-] | | There is no shortage of grumpy neckbeards on mailing lists that are taking exactly this stance. Stop acting like you’re persecuted. |
|
|
|
| ▲ | baq 29 minutes ago | parent | prev | next [-] |
> bans use of generative AI That ship has sailed, with Codex 5.3 in 90% of SWE jobs, unfortunately. I expect the next 9% won't survive the following 12 months, and the last 1% is done within 5 years. It isn't even about principles - projects not using gen AI will become basically irrelevant; the pace of gen-AI-enabled competitors will be too great.
| |
| ▲ | ZenoArrow 15 minutes ago | parent | next [-] | | Alright, let's see Codex 5.3 create a competitor to postmarketOS (without just copying the homework of other devs). If you believe in the technology so much, put it to the test, see what it can really do. | | |
| ▲ | fartfeatures 3 minutes ago | parent | next [-] | | Fun that you had to caveat it with some hand-wavy homework bull. Gives you a nice get-out-of-jail-free clause when an AI inevitably writes an OS. | |
| ▲ | dist-epoch 8 minutes ago | parent | prev [-] | | Reminds me of how, one year ago, people were saying "sure, GPT-4o can write a function, but try to make it write a whole application". | | |
| ▲ | ZenoArrow a minute ago | parent [-] | | Sure, AI has developed quickly, but let's see it take on a real engineering challenge, rather than regurgitating boilerplate code. Writing device drivers from incomplete specs is much harder than "writing a whole application" where the specs are clearly defined and there's a lot more example code to reference. If you believe in AI so much, and believe that it's unreasonable for postmarketOS to not want to use it, put it to the test, prove the doubters wrong, what have you got to lose? |
|
| |
| ▲ | surajrmal 10 minutes ago | parent | prev | next [-] | | This stat is grossly inflated. I don't disagree with the general point, but adoption isn't that high yet, and certainly not for Codex specifically. | |
| ▲ | dist-epoch 9 minutes ago | parent | prev [-] | | Sure, but how do you make irrelevant something that is already irrelevant (PostmarketOS)? |
|
|
| ▲ | chasil 2 hours ago | parent | prev | next [-] |
I do not understand why Lineage insists on waiting for eBPF backports when PostmarketOS has a far newer kernel running on the same hardware.
| |
| ▲ | 9cb14c1ec0 2 hours ago | parent | next [-] | | Core Android functionality relies on eBPF in a way that PostmarketOS does not. PostmarketOS is much more of a Linux distro than Android is. They are not very comparable. | |
| ▲ | zozbot234 an hour ago | parent | prev [-] | | AOSP patched kernels still include some features that are not in the mainline version. The LineageOS folks are working on support for mainline kernels, but AIUI it's not there yet. |
|
|
| ▲ | egorfine 40 minutes ago | parent | prev | next [-] |
> Submitting contributions fully or in part created by generative AI tools to postmarketOS. So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all. How is it possible to distinguish between the two in the vast majority of cases, where the hand-written code and the autocompleted code are byte-by-byte identical? Are we supposed to record video of us coding to show that we typed the letters one by one? > 2. Recommending generative AI tools to other community members for solving problems in the postmarketOS space. Is searching for pieces of code considered part of solving problems? Then how do we distinguish between finding a required function by grepping code and by asking an LLM to search for it? Can we ask an LLM questions about postmarketOS? Like, "what is the proper way to query kernel for X given Z"? If a community member asks this question and I already know the answer via an LLM, am I now banned from giving the correct answer? -- Don't get me wrong. I am sick and tired of the vomit-inducing AI bullshit (as opposed to the tremendous help that LLMs provide to experienced devs). But I fail to see how a policy like this is even enforceable, let alone productive and sane. On the other hand, I absolutely see where this policy is coming from. It seems that projects are having a hard time navigating the issue and are looking for ways to stem the overwhelming amount of incoming slop. I think we still haven't found the right way to do it.
| |
| ▲ | kunai 30 minutes ago | parent [-] | | > So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all. Because autocomplete still requires heavy user input and a SWE at the top of the decision-making tree. You could argue that using Claude or Codex enables you to do the same thing, but there's no guarantee someone isn't vibecoding and then not testing adequately to ensure, firstly, that everything can be debugged, and secondly, that it fits in with the broader codebase before they try to merge or PR. Plenty of people use Claude like an autocomplete or to bounce ideas off of, which I think is a great use case. But beyond that, using a tool like that in more extreme ways is becoming increasingly normalized, and that's probably not something you want in your codebase if you care about code quality and avoiding pointless bugs. Every time I see a post on HN about some miracle work Claude did, it's always been very underwhelming. Wow, it coded a kernel driver for out-of-date hardware! That doesn't do anything except turn a display on... great. Claude could probably help you write a driver in less time, but it'll only really work well, again, if you're at the top of the hierarchy of decision making and are manually reviewing the code. There's no guarantee of that in the FOSS world, because we don't have keyloggers installed on everybody's machine. | | |
| ▲ | egorfine 26 minutes ago | parent [-] | | Fully agree with you on all points. But again: how do we distinguish between manual code input and sophisticated autocomplete? | | |
| ▲ | UqWBcuFx6NV4r 3 minutes ago | parent | next [-] | | You can’t, and that’s why the whole thing is dumb. Why not just have rules against... bad code? I can tell the difference between good code and bad LLM-generated code, and I’m no AI fanboy who’s neck-deep in the scene. I am a normal software developer. Some people are just grumpy about change, in which case I’d say that they chose the single worst industry to be in. | |
| ▲ | idiotsecant 15 minutes ago | parent | prev | next [-] | | The project is simply saying what they want. If you choose to ignore that for some weird reason, then congratulations on being a jerk, I guess. | |
| ▲ | egorfine 9 minutes ago | parent [-] | | Can you confirm that continuing to use autocomplete in a code base against the policy of the project does make the person a jerk? |
| |
| ▲ | aboardRat4 14 minutes ago | parent | prev [-] | | If it's crap, then it's AI. If it's okay, then we pretend it's just sophisticated autocomplete. | |
| ▲ | egorfine 11 minutes ago | parent [-] | | That's pretty much obvious, but the policy specifically argues against it and stands on moral grounds. |
|
|
|
|
|
| ▲ | mono442 2 hours ago | parent | prev [-] |
It's not surprising the whole project isn't useful for anything if they don't embrace genAI to speed up development.
| |
| ▲ | surgical_fire 2 hours ago | parent | next [-] | | Yes, the famously useless PostmarketOS. Why don't you share the list of very useful things you created instead, mono442? | | |
| ▲ | nananana9 an hour ago | parent | next [-] | | Never ask a woman her age or a vibe coder to show you a useful program they've written. | |
| ▲ | mono442 an hour ago | parent | prev [-] | | I don't work on open source stuff, but I work at a financial institution, and genAI has been a huge productivity boost. I can easily write 2x-5x more code than before genAI. | |
| ▲ | lm28469 an hour ago | parent | next [-] | | Do you bring home 2x-5x more money every month then? Does your company make 2x-5x more profits? The vibecoder paradox: everyone is 10x as productive, yet no one can show even a 1.2x increase in anything (besides bot-generated comments, traffic, and other background noise). | |
| ▲ | jsheard an hour ago | parent | prev | next [-] | | And as we all know, more lines of code always produces better results. That's why we call it "technical wealth". | |
| ▲ | hakube an hour ago | parent | prev | next [-] | | Is the software you're working on useful? Care to share the link so we can take a look? | |
| ▲ | qsera an hour ago | parent | prev [-] | | So do you review all that code as well? | | |
|
| |
| ▲ | ForHackernews 2 hours ago | parent | prev | next [-] | | No one is stopping you from vibe-coding a POSIX-compatible mobile OS. | | |
| ▲ | hu3 an hour ago | parent [-] | | Not parent commenter but this is bound to happen. And I highly doubt iOS and Android are free from LLM assisted code at this point. | | |
| ▲ | mpol an hour ago | parent | next [-] | | Could AI write a highly specific camera driver or GPU driver, without any documentation at all? | | |
| ▲ | hu3 32 minutes ago | parent | next [-] | | Probably not, and why would it need such a constraint? Not even humans can do that. Documentation needs to at least be reverse-engineered and understood before implementation. | |
| ▲ | pantalaimon 41 minutes ago | parent | prev [-] | | I'm sure it could generate a decent device tree |
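For readers unfamiliar with the term, a device tree is a declarative hardware description the kernel reads at boot; a fragment of the sort being discussed might look like the sketch below. The snippet is purely illustrative: the I2C bus label, GPIO line, and interrupt wiring are hypothetical, though `edt,edt-ft5406` is a real touchscreen binding in the mainline kernel.

```dts
/* Hypothetical overlay fragment enabling an I2C touchscreen on a phone SoC.
   Assumes #include <dt-bindings/interrupt-controller/irq.h> for IRQ_TYPE_*. */
&i2c1 {
    status = "okay";

    touchscreen@38 {
        compatible = "edt,edt-ft5406";           /* FocalTech FT5406 controller */
        reg = <0x38>;                            /* I2C slave address */
        interrupt-parent = <&gpio>;
        interrupts = <45 IRQ_TYPE_EDGE_FALLING>; /* GPIO line is made up */
    };
};
```

Boilerplate like this is plausibly LLM territory; getting the addresses and interrupt lines right for real hardware still takes schematics or reverse engineering.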
| |
| ▲ | imadr an hour ago | parent | prev [-] | | Yes, and? Let's suppose your statement is 100% true; I genuinely don't see the point of these kinds of comments. Why is it that every time some person or group of people enacts an anti-LLM policy in their project, other people feel the personal need to stress how useful LLMs are and how that project is bound to fail without them? postmarketOS clearly exists and works. EVEN if LLMs were absolutely perfect for speeding up development tenfold, is there any absolute moral necessity to use them? Also, isn't this just the goalpost-moving that LLM fanatics love to point out? | |
| ▲ | hu3 33 minutes ago | parent [-] | | I'm pointing out that their expectation of an AI-free OS is pointless, because AI-assisted code is most probably already present in the devices they use. And I dare say that even for postmarketOS: 1) There's no way they can prevent AI-assisted code from reaching their codebase. 2) They will most probably change this policy in the future, lest other forks/projects outpace them in terms of utility and they get reduced to a carriage in a car world. | |
| ▲ | raincole 28 minutes ago | parent [-] | | The stance is not to 'prevent AI-assisted code from reaching their codebase.' It's not like AI-assisted code is literally poisonous and their codebase dies if touched. The stance is to deter random vibe-coders trying to resume-max by submitting PRs to well-known open source projects. There are so many of them right now. Hopefully, by making it clear, (some of) them will realize doing that is just wasting their tokens. | |
| ▲ | hu3 19 minutes ago | parent [-] | | I understand there's an avalanche of vibe slop PRs. But to be clear, their AI stance is as clear-cut as can be. Their stance IS INDEED to 'prevent AI-assisted code from reaching their codebase': > The following is not allowed in postmarketOS: > Submitting contributions fully or in part created by generative AI tools to postmarketOS. source: https://docs.postmarketos.org/policies-and-processes/develop... | |
| ▲ | UqWBcuFx6NV4r a minute ago | parent [-] | | I love the idea that people would stop submitting vibe slop PRs just because a project has a rule against using AI. Delusional. |
|
|
|
|
|
| |
| ▲ | MonkeyClub 2 hours ago | parent | prev | next [-] | | Whoever needs more slop faster can easily find it elsewhere. If PostmarketOS doesn't want to follow the trend, that's well and good. | |
| ▲ | ACCount37 2 hours ago | parent | prev [-] | | Weird stance to take. I can understand "untested AI-genned code is bad, and thus anything that reeks of AI is going to be scrutinized" - especially given that PostmarketOS deals a lot with kernel drivers for hardware. Notoriously low error margins. But they just had to go out of their way and make it ideological rather than pragmatic. | | |
| ▲ | xantronix 5 minutes ago | parent | next [-] | | The licensing of code generated by LLMs is not a settled matter in all jurisdictions; this is a very valid pragmatic concern that they address. | |
| ▲ | jonathrg 2 hours ago | parent | prev | next [-] | | It's fine for a project to have moral/ideological leanings, it's only weird if you insist that project teams should be entirely amoral. | | |
| ▲ | trollbridge 2 hours ago | parent | next [-] | | The main reason open source projects exist at all is because of people who started them with quite often fringe ideological leanings. Just look at the GNU project. | | |
| ▲ | Joker_vD an hour ago | parent [-] | | And fringe economical leanings, too. Just look at the GNU project: the firmware in printers is still of subpar quality, and GNU didn't really help to change that... and why on Earth would it, anyway? |
| |
| ▲ | Joker_vD an hour ago | parent | prev [-] | | > It's fine for a project to have moral/ideological leanings As long as they align with the correct values (i.e., yours), of course. When they adopt the wrong values, it's not fine. | |
| ▲ | debugnik an hour ago | parent | next [-] | | There's still a line between values I disagree with and values that directly attack me as a person. The former is how many of us feel about some of our dependencies and most proprietary software we use, so it's clearly fine to some degree. | |
| ▲ | jonathrg an hour ago | parent | prev [-] | | But it is fine. If I disagree with a project's values I'm not going to contribute to it, and they wouldn't want me there either. |
|
| |
| ▲ | yehoshuapw 2 hours ago | parent | prev [-] | | As a kernel developer, I use LLMs for some tasks, but I can say they are not there yet for writing real kernel-space code. | | |
| ▲ | egorfine 37 minutes ago | parent | next [-] | | Absolutely. But at the same time, I cannot imagine going back to coding with no help from LLMs. Asking Stack Overflow and waiting for hours to get my question closed instead of asking an LLM? No way. | |
| ▲ | crimsonnoodle58 an hour ago | parent | prev | next [-] | | Exactly, you can use it for some tasks. But why "explicitly forbid generative AI"? If you use AI to make repetitive tasks less repetitive, and clean up any LLM-ness afterwards, would they notice or care? I find blanket bans inhibiting; they reek of fear of change rather than a real substantive stance. | | |
| ▲ | zozbot234 an hour ago | parent | next [-] | | > and clean up any LLM-ness afterwards That never happens. It's actually easier to write the code from scratch and avoid LLM-ness altogether. | |
| ▲ | jonathrg an hour ago | parent | prev | next [-] | | They explain why in their AI policy. It's an ethical stance. Of course they wouldn't notice if there aren't clear signs of LLM-ness, but that's not the main reason why they forbid it. https://docs.postmarketos.org/policies-and-processes/develop... | | |
| ▲ | crimsonnoodle58 an hour ago | parent [-] | | Thanks for the clarification. Not that I agree with their stance (the exact same could have been said at the start of the industrial revolution), but I respect it nonetheless. | |
| ▲ | coldpie 16 minutes ago | parent [-] | | > the exact same could have been said at the start of the industrial revolution The pollution caused by said revolution is currently putting humanity at a serious risk of world war and maybe even extinction so... maybe they had a point? I'm not taking a strong stance either way here, but worth thinking about the downsides from the industrial revolution, too. |
|
| |
| ▲ | jsheard an hour ago | parent | prev [-] | | > But why "explicitly forbid generative AI". The AI policy linked from the OP explains why. It's half not wanting to deal with slop, and half ethical concerns which still apply when it's used judiciously. |
| |
| ▲ | ACCount37 2 hours ago | parent | prev [-] | | Same. Having an LLM helps, especially when you're facing a new subsystem you're not familiar with, and trying to understand how things are done there. They still can't do the heavy duty driver work by themselves - but are good enough for basic guidance and boilerplate. | | |
| ▲ | hedora an hour ago | parent | next [-] | | My reading of their AI statement says your kernel contributions are no longer welcome in PostmarketOS, and also, since you're encouraging others in their space to use such tools, you're in violation of their code of conduct. This applies to the person you're replying to too. I think their policy is poorly thought out, and that little good will come of it. At best, it'll cause drama in the project, and discourage useful contributions. It's a shame, since we desperately need an alternative to the phone duopoly. | |
| ▲ | trollbridge 2 hours ago | parent | prev [-] | | Guidance and boilerplate... in other words, documentation. |
|
|
|
|