Quillx is an open standard for disclosing AI involvement in software projects(github.com)
19 points by qainsights 6 hours ago | 22 comments
big-chungus4 19 minutes ago | parent | next [-]

The labels are not transparent - if you see this badge on a GitHub README, you won't be able to tell that it's about AI usage. I also don't find the labels particularly useful: when you are proposing an actual standard, you have to sit down and design it carefully and thoroughly, which I don't believe happened here. So it looks cool, but I don't think it's super useful.

jannniii an hour ago | parent | prev | next [-]

Nice idea, but the labels are a bit too opinionated for me.

Literally all my code has been "ghostwritten" for the past 18 months. That doesn't sound like something enterprise customers would want to hear and then have to work out the meaning of.

wewewedxfgdf an hour ago | parent | prev | next [-]

We should assume projects have AI/LLM development assistance unless stated otherwise.

You may have noticed the absolutely vast array of AI development tools, assistants, IDEs, and integrations - that's a reasonable indicator that developers are actually doing AI/LLM development.

crimsonnoodle58 2 hours ago | parent | prev | next [-]

I would think the terms 'vibe coded', 'vibed', '100% vibes', etc. would be far more appropriate and better known than 'lorem ipsum' when it comes to generating code without reviewing the output.

If I saw that badge on someone's GitHub, I would think it had something to do with lorem ipsum text generation rather than anything to do with AI.

rzmmm 31 minutes ago | parent | prev | next [-]

In academia there is a widespread practice of simply including a sentence about how AI was used in an article. It's simple and it works well.

varun_ch 3 hours ago | parent | prev | next [-]

A little ironic that the README, SPEC.md and the poster's comment here all smell of LLM writing!

jofzar an hour ago | parent [-]

They gave themselves a 3/5; at least they are honest.

hedora 4 hours ago | parent | prev | next [-]

(1) Why?

(2) The code I write with AI doesn’t fit on the scale.

charcircuit 2 hours ago | parent [-]

Considering that the more AI you use, the redder the badge gets, along with the demeaning language of the scale, I assume it is mainly for anti-AI people to virtue signal about not using it.

retsibsi 39 minutes ago | parent [-]

It looks a bit like that, but they gave their own repo a 3/5 rating and it's full of obvious LLMisms, so I think they're not totally anti-AI and are trying to be evenhanded.

To me, the metaphor doesn't really work, especially at level 5. Lorem Ipsum is literal placeholder text that is basically the same everywhere it's used; I don't see what that has to do with vibe code. (Also, the verse/prose thing seems pretty wanky to me, but I admit that's just a matter of taste.)

peteforde 4 hours ago | parent | prev | next [-]

Given the reality that a lot of people [fairly or unfairly] judge anything that uses "AI" in a decisively negative way, what possible advantage is there in giving them a reason to dismiss your project without evaluating it on its own merits?

jimbooonooo 3 hours ago | parent [-]

Is honesty an important quality to you? Does lying by omission concern you for the people and projects you choose to interact with?

retsibsi 31 minutes ago | parent | next [-]

I'm with you on honesty, and I've certainly seen people tacitly trying to pass off AI outputs as human written. But I think we've reached a point where, in lots of contexts, we can't reasonably assume human authorship by default any more. (We can reasonably want it and push for it! I just mean we can't literally expect it.) So even when we would prefer openness, I think 'lying by omission' is too harsh a characterisation for people who choose not to declare AI authorship but don't actively try to cover it up.

the_biot 35 minutes ago | parent | prev | next [-]

Honesty is the whole problem with ideas like this. If you're the kind of deluded idiot who considers LLM-generated crap "your code", stating exactly how little you had to do with it is not to your advantage. Far easier to maintain the lie.

9864247888754 2 hours ago | parent | prev [-]

Nobody owes you any transparency about the way they develop their software.

jimbooonooo 2 hours ago | parent [-]

They sure don't, but insight into, and alignment with, the story and development process often makes all the difference in which projects people choose to contribute to.

easygenes 2 hours ago | parent | prev | next [-]

This is very similar to a project I created, https://github.com/Entrpi/autonomy-golf, which I have been using as a gamified development process on active projects.

The key insight was not to just handwave or guess at how much is automated, but to make evaluation and review part of the continuous development loop. I first implemented it in https://github.com/Entrpi/autoresearch-everywhere, where I used it to deliberately automate more, in the spirit of Karpathy's upstream (and to very good effect: I have some of the best autoresearch results anywhere, and the platform is far more robust than when it started).

lsh0 3 minutes ago | parent [-]

[delayed]

qainsights 6 hours ago | parent | prev | next [-]

AIx is an open standard for disclosing AI involvement in software projects - expressed through the language of authorship. Not a judgment. Just transparency.

rvz 5 hours ago | parent [-]

Just one tiny issue:

"AIX®" is also a registered trademark of International Business Machines Corporation (IBM) and used for the AIX® operating system that is still in use today.

I would be careful about using that name.

qainsights 5 hours ago | parent [-]

Thanks. Renamed it to https://github.com/QAInsights/Quillx

pbronez 4 hours ago | parent | prev [-]

Neat idea. I like the five-point scale.