Ghostty's AI Policy (github.com)
137 points by mefengl 3 hours ago | 63 comments
Version467 an hour ago | parent | next [-]

The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competency as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

monegator 27 minutes ago | parent | next [-]

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying with a screenshot from GPT?

ever asked anything in a public group, only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.

Which is why I applaud every code of conduct that has public ridicule as the punishment for wasting everybody's time.

Sharlin 16 minutes ago | parent | next [-]

The problem is that people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction don't make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.

monooso 21 minutes ago | parent | prev | next [-]

Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.

Aeolun 18 minutes ago | parent | prev [-]

Random people don’t do this. Your boss however…

ionwake 11 minutes ago | parent | prev | next [-]

TBH I'm not sure if this is a "growing up in a good area" vibe. But over the last decade or so I have had to slowly learn that the people around me have no sense of shame. This wasn't their fault, but mine. Society has changed, and if you don't adapt you'll end up confused and abused.

I am not saying one has to lose their shame, but one should at least understand it.

flexagoon an hour ago | parent | prev | next [-]

Keep in mind that many people also contribute to big open source projects just because they believe it will look good on their CV/GitHub and help them get a job. They don't care about helping anyone; they just want to write "contributed to Ghostty" in their application.

0x696C6961 3 minutes ago | parent | next [-]

From my experience, it's not about helping anyone or CV building. I just ran into a bug or a missing feature that is blocking me.

nchmy 28 minutes ago | parent | prev [-]

I think this falls under the "have no shame" comment that they made.

Etheryte an hour ago | parent | prev | next [-]

I worked for a major open-source company for half a decade. Everyone thinks their contribution is a gift and you should be grateful. To quote Bo Burnham, "you think your dick is a gift, I promise it's not".

weinzierl 16 minutes ago | parent | prev | next [-]

"The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have."

And this is one half of why I think

"Bad AI drivers will be [..] ridiculed in public."

isn't a good clause. The other is that ridiculing others, no matter what, is simply not decent behavior. Putting it as a rule in your policy document only makes it worse.

conartist6 8 minutes ago | parent [-]

Getting to live by the rules of decency is a privilege now denied us. I can accept that but I don't have to like it or like the people who would abuse my trust for their personal gain.

Tit for tat

postepowanieadm 9 minutes ago | parent | prev | next [-]

If you are from a poor society, you can't afford to have shame. You either succeed or fail, again and again, and keep trying.

Ronsenshi an hour ago | parent | prev | next [-]

It's good to regularly see such policies and the discussions around them, to remind me how staggeringly shameless some people can be and how many such people are out there. Interacting mostly with my peers, friends, and acquaintances, I tend to forget that they don't represent the average population, and after some time I start to assume all people are reasonable and act in good faith.

kleiba an hour ago | parent | prev | next [-]

"Other people" might also just be junior devs - I have seen time and again how (over-)confident newbies can be in their code. (I remember one case where a student suspected a bug in the JVM when some Java code of his caused an error.)

It's not necessarily maliciousness or laziness, it could simply be enthusiasm paired with lack of experience.

xxs 19 minutes ago | parent [-]

I have found bugs in the native JVM; it usually takes some effort, though. Printing the assembly is the easiest approach. (I don't consider bugs in java.lang/util/io/etc. code an interesting case.)

Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...

In the early days (Bug Parade times), bugs were a lot more common; nowadays I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.
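
For illustration, dumping the JIT's generated assembly looks roughly like this (a hedged sketch: the class and method names are placeholders, and the hsdis disassembler plugin must be on the JVM's library path for the output to be actual assembly):

```sh
# Print the JIT-compiled assembly of a single method.
# Requires the hsdis disassembler plugin; class/method names are placeholders.
java -XX:+UnlockDiagnosticVMOptions \
     -XX:CompileCommand=print,com.example.Main::hotMethod \
     com.example.Main
```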

6LLvveMx2koXfwn 34 minutes ago | parent | prev | next [-]

Shamelessness is very definitely in vogue at the moment. It will pass; let's hope for more than ruins.

Sharlin 25 minutes ago | parent | prev | next [-]

You just have to take a look at what people write on social media, using their real name and photo, to conclude that no, some people have no shame at all.

arbitrandomuser an hour ago | parent | prev | next [-]

When it comes to opening up opportunities, I don't think it's a matter of shame for them anymore. A lot of people (especially in regions where living is tough and competition is fierce) will do anything, by hook or by crook, to get ahead of the competition. And if GitHub contributions are a metric for getting hired or noticed, then you are going to see them spammed.

DrewADesign an hour ago | parent | prev | next [-]

To have that shame, you need to know better. If you don't know any better, having access to a model that can generate code, plus a cursory understanding of the language syntax, probably feels like knowing how to write good code. Dunning-Kruger strikes again.

I’ll bet there are probably also people trying to farm accounts with plausible histories for things like anonymous supply chain attacks.

nobodywillobsrv 7 minutes ago | parent | prev | next [-]

I would imagine there are a lot of "small nice-to-haves" that people submit because they are frustrated by the sheer complexity of submitting changes: minor things that involve a lot of complexity merely in terms of changing some config or some default, etc. Something with a significant probability of being wrong, but also a high probability that someone who knows the project can quickly see whether it's OK or not.

I.e., imagine a change that is literally a small diff, that is easy to describe as a mere user and not a developer, and that requires quite a lot of deep understanding merely to submit as a PR (build the project! run the tests! write the template for the PR!).

Really, a lot of this stuff ends up being a kind of failure mode of various projects that we all fall into at some point, where "config" is in the code and what could be a simple change and test requires a lot of friction.

Obviously not all submissions are going to be like this, but I think I've tried a few little ones like that, where I would normally just leave whatever annoyance I have alone but think "hey, maybe it's a 10-minute faff with AI and a PR".

The structure of project incentives kind of creates this. Increasing the cost of contribution is a valid strategy, of course, but from a holistic project point of view it is not always a good one, especially if you are not dealing with adversarial contributors but only slightly incompetent ones.

blell 34 minutes ago | parent | prev [-]

It's nothing but cultural expectations. We need to firewall the West off from the rest of the world. Not joking.

evilhackerdude a minute ago | parent | prev | next [-]

Sounds reasonable to me. I've been wondering about encoding detailed AI disclosure in an SBOM.
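
For what it's worth, a hypothetical sketch of what that could look like as CycloneDX-style component properties; the `ai:*` property names and values below are invented for illustration, not any existing standard (JSON has no comments, so all the hedging lives in this sentence):

```json
{
  "components": [
    {
      "type": "library",
      "name": "example-module",
      "properties": [
        { "name": "ai:assisted", "value": "true" },
        { "name": "ai:model", "value": "some-llm-v1" },
        { "name": "ai:human-reviewed", "value": "true" }
      ]
    }
  ]
}
```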

On a related note: I wish we could agree on rebranding the current LLM-driven, never-gonna-be-AGI generation of "AI" to something else… now I'm thinking of when I read the in-game lore definition of VI (Virtual Intelligence) back when I played Mass Effect 1 ;)

arjunbajaj an hour ago | parent | prev | next [-]

I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI generated code does not substitute human thinking, testing, and clean up/rewrite.

On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it is correct. Adding indirection where it does not make sense is a big issue I've noticed with LLMs.

imiric an hour ago | parent [-]

I agree with you on the policy being balanced.

However:

> AI generated code does not substitute human thinking, testing, and clean up/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

Terretta 37 minutes ago | parent [-]

Intern generated code does not substitute for tech lead thinking, testing, and clean up/rewrite.

alansaber an hour ago | parent | prev | next [-]

"Pull requests created by AI must have been fully verified with human use." should always be a bare minimum requirement.

Lucasoato 40 minutes ago | parent | prev | next [-]

> Bad AI drivers will be banned and ridiculed in public. You've been warned. We love to help junior developers learn and grow, but if you're interested in that then don't use AI, and we'll help you. I'm sorry that bad AI drivers have ruined this for you.

Finally, an AI policy I can agree with :) Jokes aside, it might sound a bit too aggressive, but it's also true that some people really have no shame in overloading you with AI-generated shit. You need to protect your attention as much as you can; it's becoming the new currency.

weinzierl 11 minutes ago | parent [-]

I don't think ridicule is an effective threat for people with no shame to begin with.

jakozaur 2 hours ago | parent | prev | next [-]

See the X thread for the rationale: https://x.com/mitchellh/status/2014433315261124760?s=46&t=FU...

"Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly."

I have a side project, git-prompt-story, to attach Claude Code sessions to GitHub commits as git notes. Though it is not that simple to do automatically (e.g., I need to redact credentials).
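
A minimal sketch of the git-notes mechanics (assumptions: the file names and redaction pattern are illustrative, the sed flags are GNU sed, and this is not git-prompt-story's actual interface):

```sh
# Crude credential-redaction pass over the session transcript
# (illustrative only; real redaction needs to be far more thorough).
sed -E 's/(api[_-]?key|token)[=:][^ ]*/\1=REDACTED/gI' session.jsonl > session.redacted.jsonl

# Attach the redacted transcript to the current commit under a dedicated notes ref.
git notes --ref=ai-sessions add -F session.redacted.jsonl HEAD

# Notes refs are not pushed by default, so publish them explicitly.
git push origin refs/notes/ai-sessions
```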

radarsat1 an hour ago | parent | next [-]

I've thought about saving my prompts along with project development and even done it by hand a few times, but eventually I realized I don't really get much value from doing so. Are there good reasons to do it?

simonw an hour ago | parent | next [-]

For me it's increasingly the work. I spend more time in Claude Code going back and forth with the agent than I do in my text editor hacking on the code by hand. Those transcripts ARE the work I've been doing. I want to save them in the same way that I archive my notes and issues and other ephemera around my projects.

My latest attempt at this is https://github.com/simonw/claude-code-transcripts which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...

awesan 36 minutes ago | parent | prev | next [-]

If the AI generated most of the code based on these prompts, it's definitely valuable to review the prompts before even looking at the code. Especially in the case where contributions come from a wide range of devs at different experience levels.

At a minimum it will help you be skeptical of specific parts of the diff, so you can look at those more closely in your review. But it can also inform test scenarios, etc.

fragmede an hour ago | parent | prev [-]

It's not for you. It's so others can see how you arrived at the code that was generated. They can learn better prompting for themselves from it, and also see how you think. They can see which cases got considered, or not. All sorts of good stuff that would be helpful for reviewing giant PRs.

Ronsenshi 41 minutes ago | parent [-]

Sounds depressing. First you deal with massive PRs and now also these agent prompts. Soon enough there won't be any coding at all, it seems. Just doomscrolling through massive prompt files and diffs in hopes of understanding what is going on.

optimalsolver an hour ago | parent | prev [-]

>I want to see full session transcripts, but we don't have enough tool support for that broadly

I think AI could help with that.

rikschennink 36 minutes ago | parent | prev | next [-]

> No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.

I find this distinction between media and text/code so interesting. To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their models, it's naive to think that they didn't do the same with text and code.

embedding-shape 21 minutes ago | parent [-]

> To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

It really isn't. Don't you recall the "protests" when Microsoft started using repositories hosted on GitHub to train its own coding models? Lots of articles and sentiment everywhere at the time.

It seems to have died down, though, probably because most developers at this point use LLMs in some capacity. Some just use them as a search-engine replacement, others to compose snippets they copy-paste, and others don't type code at all anymore, just instructions, then review the output.

I'm guessing Ghostty feels that if they banned generated text/code, they'd block almost all potential contributors. Not sure I agree with that personally, but I'm guessing that's their perspective.

cranium an hour ago | parent | prev | next [-]

A well-crafted policy that, I think, will be adopted by many OSS projects.

You need that kind of sharp rule to compete against unhinged (or drunken) AI drivers, and that's unfortunate. But at the same time, letting people DoS maintainers' time at essentially no cost is not an option either.

mefengl 3 hours ago | parent | prev | next [-]

If you prefer not to use GitHub: https://gothub.lunar.icu/ghostty-org/ghostty/blob/main/AI_PO...

postepowanieadm 2 hours ago | parent | next [-]

That's really nice - and a fast UI!

kleiba an hour ago | parent [-]

It gets even better when you click on "raw", IMO... which is what you also get when clicking on "raw" on GitHub.

christoph-heiss 2 hours ago | parent | prev [-]

Not sure why you are getting downvoted, given that the original site is such a jarringly user-hostile mess.

embedding-shape an hour ago | parent | next [-]

Without using a random 3rd party, and without the "jarring user-hostile mess":

https://raw.githubusercontent.com/ghostty-org/ghostty/refs/h...

flexagoon an hour ago | parent [-]

This option is pretty unreadable on mobile though

embedding-shape 41 minutes ago | parent [-]

Is it? I just tried it in Safari, Firefox, and Chrome on an iPhone 12 Mini, and I can read all the text. Obviously it isn't formatted, as it's raw markdown, just like what the parent's recommended 3rd-party platform shows, but nothing is cut off or missing for me.

Actually, loading that other platform on my phone makes readability worse: there seems to be ~10% less width and less efficient use of vertical space. Given that both show unformatted markdown, I think the raw GitHub URL renders better on mobile, at least on small phones like my Mini.

user34283 an hour ago | parent | prev [-]

Whatever your opinion on the GitHub UI may be, at least the text formatting of the markdown is working, which can't be said for that alternative site.

nutjob2 an hour ago | parent | prev | next [-]

A factor that people have not considered is that the copyright status of AI-generated text is not settled law, and precedent or new law may retroactively change the copyright status of a whole project.

Maybe a bit unlikely, but still an issue no one is really considering.

There has been a single ruling (I think) that AI-generated code is uncopyrightable. There has been at least one affirmative fair-use ruling. Both of these are from the lower courts. I'm still of the opinion that generative AI is not fair use, because it's clearly substitutive.

101008 11 minutes ago | parent | next [-]

I never thought of this; you are right. What happens if, let's say, AI-generated text/code is "illegal"? Especially, what happens with all the companies that have been using it for their products? Do they need to roll back? It would be a shitshow, but super interesting to see it unfold...

direwolf20 an hour ago | parent | prev [-]

This only matters if you get sued for copyright violation, though.

christoph-heiss 24 minutes ago | parent | next [-]

No? Licenses still apply even if you _don't_ get sued?

consp 22 minutes ago | parent | prev [-]

At what time in the future does this not become an issue?

CrociDB an hour ago | parent | prev | next [-]

I recently had to adopt a similar policy for my TUI feed reader after getting some spammy AI-slop PRs: https://github.com/CrociDB/bulletty?tab=contributing-ov-file...

The fact that some people will straight-up lie after submitting a PR with lots of _that type_ of comment in the middle of the code is baffling!

kanzure an hour ago | parent | prev | next [-]

Another project simply paused external contributions entirely: https://news.ycombinator.com/item?id=46642012

Another idea is to simply promote the donation of AI credits instead of output tokens. Donating credits rather than outputs would be better, because people already working on the project are better at prompting and steering AI outputs.

lagniappe an hour ago | parent [-]

>people already working on the project would be better at prompting and steering AI outputs.

In an ideal world, sure, but I've seen the entire gamut, from amateurs producing surprising work to experts whose prompt history looks like a comedy of errors and gotchas. There's some "skill" I can't quite put my finger on in the way you must speak to an LLM vs. another dev. There's more monkey's paw involved in the LLM process, in the sense that you get what you want, but do you want what you'll get?

vegabook an hour ago | parent | prev | next [-]

Ultimately, what's happening here is that AI is undermining trust in remote contributions and in new code. If you don't know somebody personally, and know how they work, the trust barrier is getting higher. I personally am already ultra-vigilant about any GitHub repo that is not already well established, and am even concerned about existing projects' code quality going forward. I'm not against AI per se (I use it), but it's just going to get harder to fight the slop.

epolanski an hour ago | parent | prev | next [-]

Honestly, I don't care how people come up with the code they create, but I hold them responsible for what they try to merge.

I work in a team of 5 great professionals, and there hasn't been a single instance since Copilot launched in 2022 where anybody, in any single modification, did not take full responsibility for what was committed.

I know we all use it, to different extents and in different ways, but the quality of what's produced hasn't dipped a single bit; I'd even argue it has improved, because LLMs can find answers more easily in complex codebases. We started putting `_vendor` directories with our main external dependencies as git subtrees, and it's super useful for finding information about those directly in their source code and tests.
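
For illustration, the subtree setup looks roughly like this (a sketch: the repo URL and prefix are placeholders, not our actual dependencies):

```sh
# Vendor a dependency's source into _vendor/ as a squashed subtree.
git subtree add --prefix=_vendor/somelib https://github.com/example/somelib.git main --squash

# Later, pull upstream updates into the same prefix.
git subtree pull --prefix=_vendor/somelib https://github.com/example/somelib.git main --squash
```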

It's really that simple. If your teammates are producing slop, that's a human and professional problem, and those people should be fired. If you use the tool correctly, it can help you a lot in finding information and connecting dots.

Any person with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line and forfeiting the constant work of resolving design tensions.

Of course, open source is a different beast. The people committing may not be professionals and have no real stakes, so they have little to lose by producing slop, whereas maintainers are already stretched thin in time and attention.

embedding-shape an hour ago | parent | next [-]

> It's really as simple. If you or your teammates are producing slop, that's a human and professional problem and these people should be fired.

Agreed. Slop isn't "the tool is so easy to use I can't review the code I'm producing"; slop is the symptom of "I don't care how it's done, as long as it looks correct", and that's been a problem since before LLMs too. The difference is how quickly you reach the "slop" state now, not whether you have to gate your codebase and reject shit code.

As always, most problems in "software programming" aren't about software or programming but about everything around them, including communication and workflows. If your workflow allows people to not be responsible for what they produce, and allows shitty code to get into production, then that's on you and your team, not on the tools the individuals use.

altmanaltman an hour ago | parent | prev [-]

I mean this policy only applies to outside contributors and not the maintainers.

> Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!

> Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem.

Basically don't write slop and if you want to contribute as an outsider, ensure your contribution actually is valid and works.

cxrpx 2 hours ago | parent | prev | next [-]

With limited training data, that LLM-generated code must be atrocious.

antirez 17 minutes ago | parent | prev [-]

TL;DR: don't be an asshole, and produce good stuff. But I have the feeling that this is not the right direction for the future. Distrust the process; only trust the results.

Moreover, this policy is strictly unenforceable, because good AI use is indistinguishable from good manual coding. And sometimes even the reverse. I don't believe in coding policies where maintainers need to spot whether AI was used or not. I believe in experienced maintainers who are able to tell whether a change looks sensible or not.

b3kart 12 minutes ago | parent [-]

This doesn't work in the age of AI, where producing crappy results is much cheaper than verifying them. While that is the case, metadata will be important for deciding whether you should even bother verifying the results.