divvvyy 21 hours ago

Wild tale, but very annoying that he wrote it with an AI. It's horribly jarring to read.

BobAliceInATree 21 hours ago | parent | next [-]

I don't know if he wrote it via AI, but he repeats himself over and over again. It could have been 1/3 the length and still conveyed the same amount of information.

d1sxeyes 20 hours ago | parent [-]

'I don't know if he wrote it via AI, but he repeats himself'.

FinnKuhn 6 hours ago | parent [-]

Some people just aren't good writers.

garbagewoman 6 hours ago | parent [-]

It's AI, he said it himself.

FinnKuhn 5 hours ago | parent [-]

That still doesn't mean a text is AI just because it's repetitive.

Grimblewald 21 hours ago | parent | prev | next [-]

How do you know?

I'm not trying to be recalcitrant; rather, I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style based on someone else's, and given that some wiki users had prolific levels of contributions, much of their naturally written text would register as highly likely to be "AI" via those bullshit AI detector tools.

So, given what we know of LLMs (transformers at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.

gmzamz 21 hours ago | parent | next [-]

Looks like AI to me too. Em dashes (albeit nonstandard) and the ‘it’s not just x, it’s y’ ending phrases are everywhere. It's harder to put into words, but there's a sense of grandiosity to the article too.

Not saying the article is bad; it seems pretty good. Just that there are indications.

lynndotpy 20 hours ago | parent [-]

It's also strange to suggest readers use ChatGPT or Claude to analyze email headers.

Might as well say "You can tell by the way it is".

jclarkcom 19 hours ago | parent [-]

I don’t understand this comment. I’ve found AI a great tool for identifying red flags in scam emails and wanted to share that.

Grimblewald 7 hours ago | parent | next [-]

I agree with this; my experience is that a small, lightweight LLM makes a fantastic spam filter.
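
For anyone who wants to try it, a rough sketch of what I mean, not a recipe: the endpoint, model name, and filename below are placeholders, and I'm assuming the standard openai Python client pointed at any OpenAI-compatible server (llama.cpp, Ollama, or a hosted API):

    # Rough sketch: pull the headers out of a raw .eml and ask a small
    # LLM whether they look spoofed. Endpoint, model, and filename are
    # all placeholders.
    from email import message_from_bytes
    from email.policy import default
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    with open("suspicious.eml", "rb") as f:
        msg = message_from_bytes(f.read(), policy=default)

    # Flatten the headers (From, Return-Path, Received chain, auth results)
    headers = "\n".join(f"{k}: {v}" for k, v in msg.items())

    resp = client.chat.completions.create(
        model="small-local-model",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You are a spam filter. Given raw email headers, "
                        "answer YES or NO: do they show signs of spoofing "
                        "or phishing (mismatched From/Return-Path, failed "
                        "SPF/DKIM, an odd Received chain)?"},
            {"role": "user", "content": headers},
        ],
    )
    print(resp.choices[0].message.content)

A YES is a useful signal; a NO shouldn't be read as "safe".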

fn-mote 17 hours ago | parent | prev [-]

1. They are all scam emails.

2. AI detecting a scam, sure - it’s a scam. AI saying the email is ok… then what? I’d never trust it.

yuvadam 20 hours ago | parent | prev | next [-]

This blog post isn't human speech, it's typical AI slop. (heh, sorry.)

Way too verbose to get the point across, excessive usage of un/ordered bullets, em dashes, "what i reported / what coinbase got wrong", it all reeks of slop.

Once you notice these micro-patterns, you can't unsee them.

Would you like me to create a cheat sheet for you with these tell tale signs so you have it for future reference?

rdos 5 hours ago | parent [-]

Hello, would you add something to this list? I think it's pretty good

> Over‑polished prose – flawless grammar, overly formal tone, and excessive wordiness.

> Repetitive buzzwords – phrases like “delve into,” “navigate,” “vibrant,” “comprehensive,” etc.

> Lack of perspective shifts – AI usually sticks to a single narrative voice; humans naturally mix first, second, and third person.

> Excessive em‑dashes – AI tends to over‑use them, breaking flow.

> Anodyne, neutral stance – AI avoids strong opinions, trying to please every reader.

> Human writing often contains minor errors, idiosyncratic punctuation, and a more nuanced, opinionated voice.

> It's not just x, it's y

abanana 4 hours ago | parent [-]

Overuse of bold markup, particularly to begin each bullet point.

Overuse of "Here's..." to introduce or further every concept or idea.

A few parts of this article particularly jump out, such as the 2 lists following the "The SMS Flooding Attack" section (which incidentally begins "Here's where..."). A human wouldn't write them as lists (the first list in particular), they'd be normal paragraphs. Short bulleted lists are a good way to get across simple bite-sized pieces of information quickly, but that's in cases where people aren't going to read a large block of text, e.g. in ads. Overusing them in the wrong medium, breaking up a piece of prose like this, just hurts its flow and readability.
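
The tells collected in this thread are mechanical enough that you could even turn them into a toy score. Purely illustrative (the patterns are made up on the spot, and these heuristics will flag plenty of careful human writing too):

    # Toy "slop score": counts the tells listed in this thread.
    # Illustrative only; careful human writing trips these too.
    import re

    TELLS = {
        "em dash": "\u2014",
        "buzzword": r"(?i)\b(delve|vibrant|comprehensive|navigate)\b",
        "bold bullet": r"(?m)^\s*[-*]\s*\*\*",
        "here's opener": r"(?i)\bhere's\b",
        "not just x, it's y": r"(?i)\bnot just\b[^.;]{1,40}\bit's\b",
    }

    def slop_score(text: str) -> dict:
        words = max(len(text.split()), 1)
        hits = {name: len(re.findall(pat, text)) for name, pat in TELLS.items()}
        hits["tells per 1000 words"] = round(1000 * sum(hits.values()) / words, 1)
        return hits

    with open("article.txt", encoding="utf-8") as f:
        print(slop_score(f.read()))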

stefan_ 18 hours ago | parent | prev | next [-]

Sorry, but I think you just don't know a lot about LLMs. Why did they start spamming code with emojis? It's not because that's what people actually do, something that is in the training data. It's because someone reinforcement-learned the LLM to do it by asking clueless people if they prefer code with emojis.

And so at this point the excessive bullet points and similar filler trash is also just an expression of whatever stupid people think they prefer.

Maybe I'm being too harsh, and it's not that the raters are stupid in this constellation; rather, it's the people thinking you could improve the LLM by asking them to make a few very thin judgements.

Grimblewald 7 hours ago | parent [-]

I know the style that most LLMs are mimicking quite well, and I also know people who wrote like that prior to the LLM deluge that is washing over us. The reason people are choosing to make LLMs mimic those behaviours is that they used to be associated with high-effort content. The irony is that it is now associated with the lowest-effort content. The further irony is that I have stopped proofreading my comments and put zero effort into styling or flow, because right now the only human thing left to do is make low-effort content of the kind only a human can.

drabbiticus 21 hours ago | parent | prev [-]

Just chiming in here: any time I've written something online that considers things from multiple angles or presents more detailed analysis, the likelihood that someone will ask if I just used ChatGPT goes way up. I worry that people have gotten really used to short, easily digestible replies and conflate that with "human". Because of course it would be crazy for a human to expend "that much effort" on something /s.

EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI-generated assets that I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.

jclarkcom 20 hours ago | parent | next [-]

Author here - yes, this was written using guided AI. I consider this different than giving a vague prompt and telling it to write an article. My process was to provide all the information. For example, I used AI to:

1. Transcribe the phone call into text using a Whisper model (quick sketch at the end of this comment)
2. Review all the email correspondence
3. Research industry news about the breach
4. Brainstorm different topics and blog structures to target based on the information, and pick one
5. Review the style of my other blog articles
6. Write the article and redact any personal info
7. Review the article and iterate on changes multiple times

To me this is more akin to having a writer on staff who can save you a lot of time. I can do all of the above in less than 30 minutes, where it could take a full day to do it manually. I had a blog 20 years ago, but since then I never had time to write content again (too time-consuming and no ROI) - so the alternative would be nothing.
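
(If anyone's curious, step 1 is only a few lines. A rough sketch using the open-source openai-whisper package; the filename is made up:)

    # Step 1 sketch: transcribe the call recording with the open-source
    # whisper package (pip install openai-whisper; needs ffmpeg on PATH).
    # The filename is a placeholder.
    import whisper

    model = whisper.load_model("base")             # small, CPU-friendly model
    result = model.transcribe("support_call.m4a")  # placeholder recording
    print(result["text"])                          # plain-text transcript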

There are still some signs that tell you content is AI-written: verbosity, use of bold, specific HTML styling, etc. I see no issues with the approach. I've noticed some people have an allergic reaction to any hint of AI, and when the content produced is "fluff" with no real substance I get annoyed too - however, that isn't the case for all content.

shayway 20 hours ago | parent | next [-]

The issue is that the article is excessively verbose; the time you saved in writing and editing comes at the cost of wasting readers' time. There is nothing wrong with using AI to improve writing, but using it to insert fluff that came at no cost to you and no benefit to me feels like a violation of the social contract.

Please, at least put a disclaimer on top so I can ask an AI to summarize the article and complete the cycle of entropy.

jclarkcom 19 hours ago | parent [-]

I have attempted to condense it based on your feedback, and added some more info about email headers.

3rodents 10 hours ago | parent | prev | next [-]

> [...] I can do all the above in less than 30mins, where it could take a full day to do it manually [...]

Generating thousands of words because it's easy is exactly the problem with AI generated content. The people generating AI content think about quantity not quality. If you have to type out the words yourself, if you have to invest the time and energy into writing the post, then you're showing respect for your readers by making the same investment you're asking them to make... and you are creating a natural constraint on the verbosity because you are spending your valuable time.

Just because you can generate 20 hours of output in 30 minutes doesn't mean you should. I don't really care about whether or not you use AI on principle; if you can generate great content with AI, go for it. But your post is classic AI slop: it's a verbose nightmare, it's words for the sake of words, it's from the quantity-over-quality school of slop.

> I had a blog 20 years ago but since then I never had time to write content again (too time consuming and no ROI) - so the alternative would be nothing.

Posting nothing is better than posting slop, but you're presenting a false dichotomy. You could have spent the 30 minutes writing the post yourself and posted 30 minutes of output. Or, if you absolutely must use ChatGPT to generate blog posts, ask it to produce something that is a few hundred words at most. Remember the famous quote...

"If I had more time, I would have written a shorter letter."

If ChatGPT can do hundreds of hours of work for you then it should be able to produce the shortest possible blog post, it should be able to produce 100 words that say what you could in 3,000. Not the other way around!

poly2it 11 hours ago | parent | prev | next [-]

Sure, the problem here isn't a lack of veracity with regard to your source material. But many readers also care about the stylistic choices and prose of the articles they read. I don't particularly care that the complete article wasn't written by a human. The generic LLM style, however, is utterly unbearable to me. It is overly sensational and verbose, while lacking normal-sized paragraphs of natural text. It's reminiscent of a poor comic, except extrapolated to half the stuff that gets posted to HN.

fwip 4 hours ago | parent | prev [-]

If you can't be bothered to spend even an hour writing something up, especially allegations of this magnitude, then chances are you know it's actually not an article with any content worth reading.

Grimblewald 7 hours ago | parent | prev | next [-]

I get you; it grinds my gears. I've been told that I "talk" like an LLM because I go into detail and give thorough explanations on topics. I'm not easily insulted, but that was a first for me. I used to get 'human wikipedia' before, and before that 'walking dictionary', which I always thought was reductive, but it didn't quite irk me as much as being told my entire way of communicating is reminiscent of a bot. So perhaps I take random accusations of LLM use to heart, even if this one does seem overwhelmingly likely to be true.

amarant 17 hours ago | parent | prev [-]

You're getting downvoted for being right. Attempt being nuanced and people will call you a robot.

Well if that's how we identify humans I for one prefer our new LLM overlords.

A lot of people who say stuff like "boo AI!" are not only setting the bar for humanity very low, they're also discouraging intellectualism and intelligent discourse online. Honestly, if a LLM wrote a good think piece, I prefer that over "human slop".

I just wish people would critique a text on its own merits instead of inventing strawman arguments about how it was written.

Oh and, for the provocative effect — I'll end my comment with an em dash.

alwa 21 hours ago | parent | prev | next [-]

I know I shouldn’t pile on with respect to the AI Slop Signature Style, but in the hopes of helping people rein in the AI-trash-filter excesses and avoid reactions like these…

The sentence-level stuff was somewhat improved compared to whatever “jaunty Linked-In Voice” prompt people have been using. You know, the one that calls for clipped repetitive phrases, needless rhetorical questions, dimestore mystery framing, faux-casual tone, and some out-of-proportion “moral of the story.” All of that’s better here.

But there’s a good ways left to go still. The endless bullet lists, the “red flags,” the weirdly toothless faux drama (“The Call That Changed Everything”, “Data Catastrophe: The 2025 Cyber Fallout”), and the Frankensteined purposes (“You can still protect yourself from falling victim to the scams that follow,” “The Timeline That Doesn't Make Sense,” etc.)…

The biggest things that stand out to me here (besides the essay being five different-but-duplicative prompt/response sessions bolted together) are the assertions/conclusions that would mean something if real people drew them, but that don’t follow from the specifics. Consider:

“The Timeline That Doesn't Make Sense

Here's where the story gets interesting—and troubling:

[they made a report, heard back that it was being investigated, didn’t get individual responses to their follow-ups in the immediate days after, the result of the larger investigation was announced 4 months later]”

Disappointing, sure. And definitely frustrating. But like… “doesn’t make sense”? How not so? Is it really surprising or unreasonable that a major investigation into a foreign contractor, one with law enforcement and regulatory implications as well as 9-figure customer-facing damages, takes a large organization time? Doesn’t it make sense (even if it’s disappointing), when stuff that serious and complex happens, that they wait until they’re sure before saying something to an individual customer?

I’m not saying it’s good customer service (they could at least drop a reply with “the investigation is ongoing and we can’t comment til it’s done”). There’s lots of words we could use to capture the suckage besides “doesn’t make sense.” My issue is more that the AI presents it as “interesting—and troubling; doesn’t make sense” when those things don’t really follow directly from the bullet list of facts afterward.

Each big categorical claim the AI introduced this way just… doesn’t quite match what it purports to describe. I’m not sure exactly how to pin it down, but it’s as if it’s making its judgments entirely without considering the broader context… which I guess is exactly what it’s doing.

gblargg 13 hours ago | parent | prev | next [-]

The page background slowly fades in and out with a blue color. At first I thought my eyes were playing tricks on me.

glitchc 21 hours ago | parent | prev | next [-]

Supporting evidence required.

gnabgib 18 hours ago | parent [-]

https://news.ycombinator.com/item?id=45948625

anonym29 21 hours ago | parent | prev [-]

Many people find whining about coherent, meaningful text based on the source identity to be far more annoying than reading coherent, meaningful text.

But I guess you knew that already, which is why you just made a fresh burner account to whine on rather than whining from your real account.

KomoD 21 hours ago | parent [-]

Coherent? It's really annoying to read.

The post just repeats things over and over again, like the Brett Farmer thing, the "four months", telling us three times that they knew "my BTC balance and SSN" and repeatedly mentioning that it was a Google Voice number.

anonym29 20 hours ago | parent [-]

Almost sounds like the posts of people whining about LLMs.

Of course, unlike those people, LLMs are capable of expressing novel ideas that add meaningful value to diverse conversations beyond loudly and incessantly ensuring everyone in the thread is aware of their objection to new technology they dislike.

lxgr 20 hours ago | parent [-]

LLMs are definitely capable of helping with writing, connecting the dots, and sometimes now of genuine insight. They're also still very capable of producing time-wasting slop.

It's the task of anybody presenting their output for third parties to read (at least without a disclaimer that a given text is unvetted LLM output) to make damn sure it's the former and not the latter.

anonym29 20 hours ago | parent [-]

Thankfully, the 8 millionth post whining about LLMs with zero additional value added to the conversation is far less time-wasting than a detailed blog post about a real-world security incident in a major corporation that isn't being widely covered by other outlets.

The article isn't paywalled. Nobody was forced to read it. Nobody was prohibited from asking an LLM to summarize the article.

Whining about LLM written text is whining about one's own deliberate choice to read an article. There is no implied contract or duty between the author and the people who freely choose to read or not read the author's (free) publication.

It's like walking into a (free) soup kitchen, consuming an entire bowl of free soup, and then whining loudly to everyone else in the room about the soup being too salty.

lxgr 20 hours ago | parent | next [-]

I think the feedback that LLMs were used not very successfully in the making of TFA is valid criticism and might even help other/future authors.

We're probably reading LLM-assisted or even generated texts many times per day at this point, and as long as I don't notice that my time is being wasted by bad writing or hallucinated falsehoods, I'm perfectly fine with it.

fwip 41 minutes ago | parent | prev [-]

Sure, there's no guy with a gun forcing you to read it.

But we're on a site about sharing content for intellectual discussion, right? So when people keep posting the same garbage without labeling it, and you figure it out halfway through the article, it's frustrating to find you wasted your time.

To use your soup analogy: imagine this was a website to share restaurants. You see a cool new Korean place upvoted, so you stop by there for lunch sometime. You sit down, you order, and then ten minutes later, Al comes out with his trademark thin, watery soup again.

In that scenario, it's entirely reasonable to leave a comment, "Ugh, don't bother with this place, it's just Al and his shitty soup again."