ziddoap 2 days ago

At least this is a better effort at explaining why you would believe it is AI than the other poster who just says it's AI because they used the word "likely".

I still find it very annoying that in every thread about a blog post there's someone shouting "AI!" because there's an em dash, bullet points, or some common word/saying (e.g. "likely", "crucially", "in conclusion"). It's been more intrusive on my life than actual AI writing has been.

I've been accused of using AI for writing because I have used parentheses, ellipses, and various common words, because I structured a post with bullet points and a conclusion section, etc. It's wildly frustrating.

cyral 2 days ago | parent | next [-]

> because I structured a post with bullet points and a conclusion section

I do understand that this is frustrating, because in the last few months I see posts with these features everywhere. It's especially a problem on reddit, where there are numerous low-effort posts in niche subreddits that are overdone with emojis, bolded sections/titles, and em dashes. Not all of these are AI, but an overwhelming majority are, to the point where if the quality of the content is low (lots of vague sayings) and it exhibits these traits, I can say almost for certain it's AI.

What is also less talked about is that AI models are now beginning to write without exhibiting these issues. I've been playing around with GPT-4o, and its deep research feature writes articles that are extremely well written, exhibiting neither the traits above nor the classic telltale AI signs. I also had a friend ask it to write a fictional passage describing a character, and the writing was impeccable (which is depressing, because it was better than what she wrote). Soon we are not going to have any clue what is real and what isn't.

fragmede 2 days ago | parent | next [-]

The kids ask ChatGPT to rewrite it using the diction of a 9-year-old so it doesn't look like it was AI generated. If you have a big enough corpus of writing, you could use yourself as the input style to emulate. Unfortunately, I think we're going to have to get over generated vs. not as the technology improves. We'll have to judge a work on its own merits and not rely on any tells. Quelle horreur!

ziddoap 2 days ago | parent | prev [-]

>What is also less talked about is now AI models are beginning to write without exhibiting these issues.

It will be great when I continue to write the way I have for decades, continuing to be accused of being AI, while actual AI writing exceeds my ability and isn't accused of being AI.

Get me off this ride.

zahlman 2 days ago | parent | prev | next [-]

As someone who "detects" AI frequently: it's often difficult or impossible to explain where the sense comes from. It can be very much a matter of intuition, but of course it's awkward to admit that publicly. I don't fault others for coming up with an overly simple explanation.

buttercraft 2 days ago | parent [-]

How do you know how accurate you are? How do you know when you're wrong?

zahlman 2 days ago | parent | next [-]

If I'm being entirely honest, in the general case I don't.

But I don't particularly care, either. After a couple tries I decided it's better not to point at object examples of suspected LLM text all the time (except e.g. to report it on Stack Overflow, where it's against the rules and where moderators will use actual detection software etc. to try to verify). But I still notice that style of writing instinctively, and it still automatically flips a switch in my brain to approach the content differently. (Of course, even when I'm confident that something was written by a human, I still e.g. try to verify terminal commands with the man pages before following instructions I don't understand.)

Of course, AI writes the way it does for a reason. More worryingly, it increasingly seems like (verifiably) human writers are mimicking the style - like they see so much AI-generated text out there that sounds authoritative, that they start trying to use the same rhetorical techniques in order to gain that same air of authority.

buttercraft 2 days ago | parent [-]

> still notice that style of writing instinctively, and it still automatically flips a switch in my brain

See, this is what worries me. We have unknowable years of instinct, and none of it is tuned for what is happening now.

ifyoubuildit 2 days ago | parent | prev [-]

I think this is an excellent question and one people should be asking themselves frequently. I often get the impression that commenters have not considered this.

For example, whenever someone on the internet makes a claim about "most x" (e.g., most people this, most developers that): what does anyone actually know about "most" anything? I think the answer is "pretty much nothing."

cyral 2 days ago | parent [-]

Yes, this is an important point. Insert the survivorship bias plane picture that always gets posted when someone makes this mistake on other platforms (Twitter). We can be accurate at detecting poor AI writing attempts, but not know how much AI writing is good enough to go undetected.

numpad0 2 days ago | parent [-]

Someone should run a double-blind test app. There was an adversarially crafted one for images, and people still averaged only around 60% accuracy. We all think we can just glance at the data and detect AI generation, like how some experts can just watch logs scroll by and say something.

numpad0 2 days ago | parent | prev [-]

Exposure to AI output itself triggers and trains a rage response in lots of people. Blame AI for it; it's something regular people have no control over.

Asking them for causes or thought processes is just asking them to hallucinate. They don't know why; they just know that they saw it and that it deserves hate.