kbouw 14 hours ago

You would be correct. Ran the article through GPTZero, 100% AI.

subscribed 13 hours ago | parent | next [-]

These detectors are a scam, falsely flagging non-native English speakers: https://plagiarismcheckerai.app/ai-detector-false-positives-...

At this point relying on their judgement is beyond folly.

13 hours ago | parent | next [-]
[deleted]
cubefox 13 hours ago | parent | prev [-]

It's both ironic and confusing that this website itself promotes an AI detector.

xd1936 14 hours ago | parent | prev | next [-]

https://redd.it/13mft8s

rationalist 13 hours ago | parent [-]

User-friendly old Reddit link:

https://old.reddit.com/r/ChatGPT/comments/13mft8s/apparently...

71bw 14 hours ago | parent | prev | next [-]

Would not trust any of these tools in the slightest.

devmor 13 hours ago | parent | prev [-]

AI detectors that use text as a basis are not real. It is fundamentally impossible for them to exist.

HarHarVeryFunny 12 hours ago | parent | next [-]

Huh?

LLM output doesn't have the variety of human output, since LLMs operate in a fixed fashion - statistical inference followed by formulaic sampling.
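That "statistical inference followed by formulaic sampling" pipeline can be sketched as a toy function - this is not any real model's code, just an illustration of the fixed softmax-then-sample step; the example logits are made up:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Formulaic sampling step: softmax over scores, then draw one token.

    `logits` is a hypothetical dict mapping candidate tokens to raw scores,
    standing in for a real model's output layer.
    """
    rng = rng or random.Random()
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok, probs
    return tok, probs  # fallback for floating-point edge cases
```

Whatever prompt you feed the real model only changes the scores going into this step; the mechanical shape of the procedure - and hence its statistical signature - stays the same.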

Additionally, the statistics used by LLMs are going to be similar across different LLMs, since at scale it's just "the statistics of the internet".

Human output has much more variety, partly because we're individuals with our own reading/writing histories (which we're drawing upon when writing), and partly because we're not so formulaic in the way we generate. Individuals have their own writing styles and vocabulary, and one can identify specific authors to a reasonable degree of accuracy based on this.

It's a bit like detecting cheating in a chess tournament. If an unusually high percentage of a player's moves are optimal computer moves, then there is a high likelihood that they were computer generated. Computers and humans don't pick moves in the same way, and humans don't have the computational power to always find "optimal" moves.
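The chess-cheating check described above boils down to a simple overlap statistic. A minimal sketch - the function names, the move lists in the usage below, and the 0.9 threshold are all invented for illustration, not taken from any real anti-cheating system:

```python
def engine_match_rate(player_moves, engine_best_moves):
    """Fraction of a player's moves that coincide with the engine's top choice.

    Both arguments are parallel lists of moves in any shared notation.
    """
    if not player_moves:
        return 0.0
    matches = sum(1 for p, e in zip(player_moves, engine_best_moves) if p == e)
    return matches / len(player_moves)

def looks_computer_assisted(player_moves, engine_best_moves, threshold=0.9):
    """Flag a game whose engine-match rate exceeds a (purely illustrative)
    threshold far above what strong humans typically achieve."""
    return engine_match_rate(player_moves, engine_best_moves) >= threshold
```

A real system would of course compare against a baseline for the player's rating rather than a fixed cutoff, but the core signal is this match rate.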

Similarly with the "AI detectors" used to detect whether kids are using AI to write their homework essays, or whether blog posts are AI generated ... if an unusually high percentage of words are predictable from what came before (the way LLMs work), and those statistics match those of an LLM, then there is an extremely high chance that it was written by an LLM.

Can you ever be 100% sure? Maybe not. But in practice, human-written text is never going to have such statistical regularity - such an LLM signature - that an AI detector gives it more than 10-20% confidence of being AI. So when the detector says it's 80%+ confident something was AI generated, that effectively means 100%. There is of course also content that is part human, part AI (a human used an LLM to fix up their writing), which may score somewhere in the middle.
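The "statistical signature" idea can be demonstrated with a deliberately crude stand-in: instead of an LLM's token probabilities, a character-bigram model scores how predictable each character is from the one before. Detectors like GPTZero reportedly use perplexity under an actual language model; everything here (the corpus, the smoothing, the vocabulary size) is a toy assumption:

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count character bigrams to build a crude next-character model."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def avg_log_prob(text, counts, alpha=1.0, vocab_size=128):
    """Mean log-probability of each character given the previous one,
    with add-alpha smoothing. Higher (closer to 0) = more predictable."""
    total = 0.0
    for a, b in zip(text, text[1:]):
        c = counts.get(a, Counter())  # .get avoids mutating the defaultdict
        prob = (c[b] + alpha) / (sum(c.values()) + alpha * vocab_size)
        total += math.log(prob)
    return total / max(1, len(text) - 1)
```

Text that matches the model's training statistics scores much higher than text that doesn't - the same asymmetry, scaled up to a real LLM's probabilities, is what these detectors lean on.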

ben_w 11 hours ago | parent | next [-]

> LLM output doesn't have the variety of human output, since LLMs operate in a fixed fashion - statistical inference followed by formulaic sampling.

This is the wrong thing to look at; your chess analogy is much stronger, and the detection method is similar (if you can figure out a prompt that generates something close to the content, it almost certainly isn't of human origin).

But as to why the thing I'm quoting doesn't work: if you took, say, web comic author Darren Gav Bleuel, put him through a sci-fi mass-duplication incident to make 950 million copies of him, and had them all talking and writing all over the internet, people would very quickly learn to recognise the style, which would have very little variety because they'd all be forks of the same person.

Indeed, LLMs are very good at presenting styles other than their defaults - better at this than most humans. What gives LLMs away is that (1) very few people bother to ask them to act other than their defaults, and (2) all the different models, being trained in similar ways on similar data with similar architectures, are inherently similar to each other.

HarHarVeryFunny 6 hours ago | parent | next [-]

An LLM is just a computer function that predicts the next word based on the input you give it. It doesn't make any difference what the input is (e.g. "please respond in style X") - the function doesn't change, and the statistical signature of how it works will still be there.

If you don't believe me, try it for yourself. Ask an AI to generate some text and give it to the AI detector below (paste your text, then click "scan"). Now ask the AI to generate in a different style and see if that causes the detector to fail.

https://app.gptzero.me/

newsoftheday 10 hours ago | parent | prev [-]

What if the prompt includes, "Produce output that doesn't sound like an AI generated it."?

js8 8 hours ago | parent [-]

I got curious and tried: https://claude.ai/share/3af7bd6a-15f8-4533-9dc3-a44adef255b3

newsoftheday 5 hours ago | parent [-]

That's actually interesting, thanks. It's like AI is tattling on itself.

devmor 4 hours ago | parent | prev | next [-]

A human can easily produce output that looks like anything an LLM can produce; therefore an LLM detector that can say "this is 100% written by AI" cannot exist. It's really that simple.

> Can you ever be 100% sure? Maybe not

The commenter I was replying to claimed exactly this. Their AI detector showed that the text was "100%" AI generated.

goodmythical 11 hours ago | parent | prev [-]

[dead]

watsonL1F7 13 hours ago | parent | prev [-]

[flagged]