bryanrasmussen 10 hours ago

How do you know that this blog post was written by ChatGPT?

solid_fuel 10 hours ago | parent | next [-]

It feels generated to me too. It’s this:

    When you enable the Gemini API (Generative Language API) on a Google Cloud project, existing API keys in that project (including the ones sitting in public JavaScript on your website) can silently gain access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.

Specifically, the last bit - “No warning. No confirmation dialog. No email notification.” - immediately smells like LLM-generated text to me. Punchy repetition in a set of three.

If you scroll through tiktok or instagram you can see the same exact pattern in a lot of LLM generated descriptions.

MrJohz 7 hours ago | parent | next [-]

I think there's a lot more than just that, but I think part of the problem is that you just get an uncanny valley feeling. All of the phrases and rhetorical tricks that these tools use are perfectly valid, but together they feel somehow thin?

That said, some specific things that feel very AI-y are the mostly short, equally-sized paragraphs with occasional punchy one-sentence paragraphs interspersed between them; the use of bold when listing things (and the number of two-element lists); there are a couple of "it's not X, it's Y"-style statements; one paragraph ends with an "they say it's X, but it's actually Y" construct; and even the phrasing of some of the headings.

None of these are necessarily individually tells of AI writing (and I suspect if you look through my own comments and blog posts on various sites, you'd find me using many of the same constructs, because they're all either effective rhetorically, or make the text clearer and easier to understand). But there's something about the concentration of them here that feels like AI - the uncanny valley feeling.

I would put money on this post at least having gone through AI review, if not having been generated by AI from human-written notes. I understand why people do that, but I also think it's a shame that some of the individual colour of people's writing is disappearing from these sorts of blog posts.

tyre 9 hours ago | parent | prev | next [-]

Using threes is common in English writing and speaking. It has an optimal balance of expressiveness (three marking a pattern or breadth; creating momentum) without being overwhelming.

It’s not uncommon, as basic writing advice, to use sets of three for emphasis. That isn’t a signifier of LLM generation, in my opinion.

Gigachad 9 hours ago | parent | next [-]

It's also seemingly the only way ChatGPT knows how to write, while being very uncommon for blogposts beforehand. Of course it's not 100% proof, but it's the most likely explanation.

WalterGR 9 hours ago | parent [-]

It has a name. The Rule of Threes. https://en.wikipedia.org/wiki/Rule_of_three_(writing)

“The rule of three is a writing principle which suggests that a trio of entities such as events or characters is more satisfying, effective, or humorous than other numbers, hence also more memorable, because it combines both brevity and rhythm with the smallest amount of information needed to create a pattern.”

It’s how I was taught to write, but I understand that my personal experience can’t be generalized to make sweeping statements.

Do you have data that suggests it’s uncommon in human-authored blog posts and more common in LLM-generated text?

palmotea 8 hours ago | parent [-]

> It has a name. The Rule of Threes. https://en.wikipedia.org/wiki/Rule_of_three_(writing)

I don't think that's exactly it.

Speaking of LLM writing in general, it seems to greatly overuse certain types of constructions, or use them in uncommon contexts. So it probably isn't so much using the rule of threes as overusing it in certain specific ways, in certain specific contexts.

WalterGR 8 hours ago | parent [-]

I don’t necessarily doubt you or the grand-parent comment, but if it’s ‘obvious to even the most casual of observers’ (as my father would say) then it should be easy to have hard data.

coliveira 9 hours ago | parent | prev [-]

This excerpt is demonstrating the use of a literary technique to write non-literary prose. It's an almost sure sign that an LLM is generating the text.

masklinn 9 hours ago | parent [-]

Of course, how could a writer writing have writing chops and use writing techniques? It boggles the mind that anyone thinks that would ever happen. Must have been aliens.

saagarjha 8 hours ago | parent [-]

A good writer knows when to use literary techniques.

Dylan16807 5 hours ago | parent [-]

They work just fine in this post.

larusso 9 hours ago | parent | prev | next [-]

I’m not a native speaker so my level of AI recognition is already low. I find it very interesting what patterns people bring up to declare it’s AI. The punchy set of three, for instance, is a pattern I use while speaking. Can’t say I would write like this though.

solid_fuel 9 hours ago | parent | next [-]

It's not so much the grouping of 3, or the way it's supposed to be punchy, that's the problem; that is just one example of what gives the article the "LLM generated" feeling, since whatever cheap model people are using for this kind of spam has some common tics.

I use groupings of 3 and try to make things punchy myself sometimes, especially when I'm writing something intended to sway others. I think the problem with this article is the way it feels like the perfect average of corporate writing. It's sort of like the "written by committee" feel that incredibly generic pop music often has.

When I write things, I often go back and edit and reword parts. Like the brushstrokes in an oil painting, the flow of thought varies between paragraphs and even sentences. LLMs only generate things from left to right (or vice versa in RTL languages, I presume). I think that gives LLM generated text a "smooth" texture that really stands out to anyone who reads a lot.

nimonian 9 hours ago | parent [-]

I completely agree with you. There's something conspicuous about this particular use of the "group of three" device. It's trying hard, but it's goofy and obvious. I think it's not human, it's 52 trillion parameters in a trenchcoat.

deaux 7 hours ago | parent | prev | next [-]

I'm not a native speaker and my level of AI recognition is higher than 99.999% of native speakers - and I'd be happy to be tested on it for proof.

The biggest factor is simply how long you've been using LLMs to generate text, how often, how much. It's like how an experienced UI designer can instantly tell that something is off by a single pixel upon first seeing a UI, whereas if you gave me $200 to find it within 10 minutes I might well fail.

Gigachad 9 hours ago | parent | prev [-]

Aside from particulars like the set of 3, LLMs add a lot of emotive language which doesn't mean anything or is a repetition of already established points. Since they can't add any actual substance beyond what was in the prompt, the only thing they do is pad the prompt with filler language.

bryanrasmussen 9 hours ago | parent | prev [-]

OK I've seen many people make this point on this site over just the last few months, but where do you think LLMs pick up these patterns? How did this rule of threes https://en.wikipedia.org/wiki/Rule_of_three_(writing) get into the LLM so they are so damn recognizable as LLMs and not as humans?

HN Note: Yes the rule of threes is broader than just this particular pattern here, but in my opinion this common writing and communication pattern is a specific example of the rule of threes.

Punchy repetition in a set of 3. Yes. LLMs are able to capably mimic the common patterns that how-to-write books have suggested for the last 100 years as ways to make your writing more "impactful" and attention-grabbing. So are humans. They learned it from watching us.

I am a little bit worked up about this, as I have felt insulted a couple of times at having something I've written accused of being by an LLM. In one case it was because I had written something from the viewpoint of a depressed and tired character, and someone thought it had to be an LLM because it seemed detached from humanity! Success!

I too would like to be able to reliably detect when something has been written by an LLM so I can discount it out of hand, but frankly many of the attempts I see people make to detect these things seem poorly reasoned and actively detrimental.

People have learned in classes and from reading how to improve their writing. LLMs have learned from ingesting our output. If something matches a common writing 101 tip it is just as likely to be reasonably competent as it is to be non-human. The solution to escape being labelled an LLM is not to become less competent as a writer.

I have been overly verbose here, as I am somewhat worked up and angry and it is too late in the morning to go back to sleep but really too early to be awake. I know verbosity is also a symptom of being an LLM, but not giving a damn is a symptom of humanity.

kgeist 9 hours ago | parent [-]

>but where do you think LLMs pick up these patterns?

>LLMs are able to capably mimic the common patterns that how to write books have suggested for the last 100 years as ways to make your writing more "impactful" and attention-grabbing. So are humans. They learned it from watching us.

Don't forget that LLMs (at least the "instruct" versions) undergo substantial post-training to align them with the authors' objectives, so they are not a 100% pure reflection of the distribution seen on the internet. For example, it's common for LLMs to respond with "You're absolutely right!" to every second message, which isn't what humans usually do. It's a result of some kind of RLHF: human labelers liked to hear that they're right, so they preferred answers containing such phrases, and those responses became amplified. People recognize LLM-generated writing because LLMs' pattern distribution is different from the actual pattern distribution found in articles written by humans.

raincole 9 hours ago | parent | prev | next [-]

It's too well structured and the message is too clear. HN (and the whole internet) is allergic to proper writing. We praise human sloppiness now.

No, I'm not being sarcastic. People have given up the em-dash, which is an official punctuation mark you use in proper writing. And it's all downhill from there.

palmotea 8 hours ago | parent | next [-]

> It's too well structured and the message is too clear. HN (and the whole internet) is allergic to proper writing. We praise human sloppiness now.

Yes. And it's only a matter of time before the model companies start trying to train in that "human sloppiness." After all, a lot of their customers want machines that can pass for humans.

> No, I'm not being sarcastic. People have given up the em-dash, which is an official punctuation mark you use in proper writing. And it's all downhill from there.

I wouldn't be surprised if the internet language of people devolves into a weird constantly-changing mish-mash of slang and linguistic fads. Basically an arms race where people constantly innovate in order to stay distinct from the latest models.

But the end result of that would be probably fragmentation, isolation, and a kind of dark ages. Different communities would have different slang, and that slang would change so fast that old text would quickly become hard to understand.

oasisbob 4 hours ago | parent | prev [-]

Strongly disagree. The post is really poorly structured and circles the drain a few times getting to the thesis.

The issues of style are annoying, but I find it much worse to wade through these 3000 word posts which are far longer than they need to be just because they're so damn cheap to compose.

oasisbob 4 hours ago | parent | prev | next [-]

It's far longer than it needs to be because the writing process was too cheap.

5 hours ago | parent | prev | next [-]
[deleted]
SecretDreams 10 hours ago | parent | prev | next [-]

It's too structured and consistent. Imo. Has that AI smell to it, but I guess humans will eventually also start writing more like the AIs they learn from.

Dylan16807 5 hours ago | parent | next [-]

This is the first time I've seen people accuse AI text of being "too structured and consistent" compared to human text. Usually it's about specific patterns or tons of repetition or outright mistakes.

roywiggins an hour ago | parent | next [-]

One example of being "too structured" is that LLMs love an explicit introduction and conclusion, even when one isn't really warranted. It's always telling you what it's going to say, and then what it just said.

SecretDreams 2 hours ago | parent | prev [-]

Patterns = consistent?

Hnrobert42 10 hours ago | parent | prev | next [-]

AI was trained on human writing.

palmotea 8 hours ago | parent | next [-]

> AI was trained on human writing.

AI output is not varied like real human writing. This is a very distinctive narrowing of style.

SecretDreams 10 hours ago | parent | prev [-]

And now humans are trained on AI writing.

Like what happens to YouTube videos that go through the compression algorithm 20 times.

devsda 10 hours ago | parent | prev [-]

> guess humans will eventually also start writing more like the AIs they learn from.

With the AI feedback loop being so fast and tight for some tasks, the focus moves to delivery rather than learning. There is no incentive, space, or time for learning.

OakNinja 8 hours ago | parent | next [-]

For me personally, both at work and in my free time, I spend _more_ time on writing things _that matter_ since I’ve freed up time by using LLM’s for boilerplate tasks.

My motto is - If it wasn’t worth writing, it won’t be worth reading.

A good example of writing where I’d recommend using LLM’s is product documentation. You pass the diff, the description of the task, and the context (existing documentation) with a prompt ”Update the documentation…”.

Documentation is important but it's not prose. However, writing a comment on Hacker News is.

bpodgursky 10 hours ago | parent | prev [-]

Won't be well received here, but this is the truth.

bpodgursky 10 hours ago | parent | prev | next [-]

> The Core Problem

> What You Should Do Right Now

> Bonus: Scan with TruffleHog.

> TruffleHog will verify whether discovered keys are live and have Gemini access, so you'll know exactly which keys are exposed and active, not just which ones match a regular expression.

I don't know exactly, but I'm sure. The cadence, the clarity, the bolding, the italics - it's all crisp, cleanly structured, and actionable in a way that a meandering human would not distill it down to.
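The distinction the quoted post draws - keys that merely match a regular expression vs. keys that are live and have Gemini access - can be sketched roughly as below. This is a hypothetical illustration, not TruffleHog's actual logic; the `AIza...` shape is the well-known format of Google API keys, and the models-list endpoint is from the public Gemini (Generative Language) API.

```python
import re
import urllib.error
import urllib.request

# Well-known shape of Google Cloud API keys: "AIza" followed by
# 35 URL-safe characters (39 characters total).
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")


def find_candidate_keys(page_source: str) -> list[str]:
    """Return strings that merely *look* like Google API keys."""
    return GOOGLE_KEY_RE.findall(page_source)


def key_has_gemini_access(key: str) -> bool:
    """Check whether a candidate key is live for the Gemini API by
    listing models; a 200 response means the key is accepted.
    (Endpoint assumed from the public Generative Language API.)"""
    url = f"https://generativelanguage.googleapis.com/v1beta/models?key={key}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False
```

The point of the quoted advice is the second step: a regex match alone only tells you a string has the right shape, while the verification call tells you whether the key actually works against the sensitive endpoint.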

roywiggins an hour ago | parent | next [-]

I've reached the point where if any blog post has a subheading with some variant of "The Problem", I assume it's been edited with an LLM, because it co-locates with other indicators so strongly.

cyral 10 hours ago | parent | prev [-]

Yup, it was actually an interesting article, but there are a few telltale parts that sound like every AI spam post on /r/webdev and similar. "No warning. No confirmation dialog. No email notification." is another. That pattern of three repeated negatives is present in so many AI-generated promotional posts.

bpodgursky 9 hours ago | parent [-]

I don't even have a problem with the content itself, I think frankly the smell is that it's too good. It's just fascinating in the sense that it's one LLM attacking another LLM.

deaux 7 hours ago | parent | prev | next [-]

The fact that according to this reply section most of HN can't tell means that predictably, all hope is lost and there's no point in writing anything by hand any more if you're in it for money/engagement.

While writing this I suddenly realized that marketers and writers probably do a better job at recognizing it than developers and engineers, so maybe all hope isn't lost.

For those who want to know the tells: overall cadence and frequency of patterns - especially infrequency of patterns - are the biggest ones. And that means we can't actually give you the best tells, because they're more about what is absent than what is present. What's absent is any sentence pattern that falls completely outside the LLM go-tos. Anything human-written has a good mix of both; LLM-written text just entirely lacks the non-go-to patterns. Humans do use the LLM-preferred patterns, but not for every single sentence. But anyway, here we go.

> Transparently, the initial triage was frustrating; the report was dismissed as "Intended Behavior”. But after providing concrete evidence from Google's own infrastructure, the GCP VDP team took the issue seriously.

^ Fun fact - The ";" would've originally been an em-dash but was either rewritten or a rule was included for this.

> Then Gemini arrived.

^ Dramatic short sentences, a pattern with an LLM frequency magnitudes higher than its human frequency, but one that hasn't reached the public consciousness yet, a la "not just X but Y".

> No warning. No confirmation dialog. No email notification.

^ Another such pattern. Not just because it's three of them, but also because of the content and repetition. Humans rarely write like that because it again sounds overly dramatic. It's something you see in fiction rather than a technical writeup. In a thriller.

> Retroactive Privilege Expansion. You created a Maps key three years ago and embedded it in your website's source code, exactly as Google instructed. Last month, a developer on your team enabled the Gemini API for an internal prototype. Your public Maps key is now a Gemini credential. Anyone who scrapes it can access your uploaded files, cached content, and rack up your AI bill. Nobody told you.

This style of scenario writing is another one.

> Nobody told you.

Absolute drama queen.

>The UI shows a warning about "unauthorized use," but the architectural default is wide open.

Again.

> The attacker never touches your infrastructure. They just scrape a key from a public webpage.

Again.

> These aren't just hobbyist side projects. The victims included major financial institutions, security companies, global recruiting firms, and, notably, Google itself.

..

> A key that was deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.

Surprised it hasn't gained consciousness by now. Maybe that's a future plot point.

Here's a great example to train your skills on, because it's rare in that the ratio of "human : straight from LLM" shifts gradually as the article goes on: https://www.wallstreetraider.com/story.html

It started at heavy human editing (or just human-written), but less and less towards the end.

The author confirmed this when it was pointed out, FWIW [0].

[0] https://news.ycombinator.com/item?id=47013150

jibal 9 hours ago | parent | prev [-]

They don't. Many of these claims are due to illiteracy.

Someone is complaining that

> it's all crisp, cleanly structured, and actionable in a way that a meandering human would not distill it down to.

but this is a security report ... people intentionally write such things carefully and crisply with multiple edits and reviews.