arglebarnacle 20 hours ago

A really interesting article, and I'm likely to give it a shot at work. I'm grateful for it, and yet I found it difficult to get through because of a sense of "LLM style" in the prose.

I won't speculate on whether the post is AI-written or whether the author has adopted quirks from LLM outputs into their own way of writing because it doesn't really matter. Something about this "feeling" in the writing causes me discomfort, and I don't even really know why. It's almost like a tightness in my jaw or a slight ache in my molars.

Every time I read something like, "Not as an aesthetic choice. Not as nostalgia. *But as a thinking tool*" in an article I had until then taken on faith was produced in the voice of a human being, it feels like a letdown. Maybe it's just the sense that I believed I was connecting with another person, albeit indirectly, and then I feel the loss of that. But that's not entirely convincing, because I genuinely found the points this article was making interesting, and no doubt they came originally from the author's mind.

Since this is happening more and more, I'd be interested to hear what others' experiences with encountering LLM-seeming blog posts (especially ones with inherently interesting underlying content) have been like.

rmunn 19 hours ago | parent | next [-]

I've had too many LLMs tell me that software product ABC can do XYZ, only to read the ABC documentation and discover that the hallucination was the opposite of reality: the docs say "we cannot do XYZ yet but we're working on it." So for me, the question at the back of my mind when I encounter an obviously LLM-generated article is always, "So which parts of this article are factually correct, and which parts are hallucinations?" I care less about the "human voice" aspect than about the accuracy of the technical claims presented in the article.

In this particular case, if the facts about how many years ago various products came out are wrong, it doesn't matter, since I'm never going to be relying on them anyway. The fact that what the author is proposing isn't ASCII but UTF-8-encoded Unicode (emojis aren't ASCII) doesn't matter either (and I rather suspect that this particular factual error would have been present even if he had written the text entirely by hand, with no LLM input), because again, I'm not going to be relying on that fact for anything. The idea he presents is interesting, and is obviously possible.

So I care less about the "voice" of an article, but a LOT about its accuracy.

rmunn 17 hours ago | parent | next [-]

I should add that for me, when it comes to LLMs telling me "facts" that are the opposite of reality, "too many" equals ONE or more.

trollbridge 18 hours ago | parent | prev | next [-]

This is an ongoing problem for those of us who use LLMs every day. I have to check and recheck everything they claim is possible.

roywiggins 17 hours ago | parent | prev | next [-]

I also have this reaction to this type of prose, for better or worse. It's depressing to see so much of it shared. It makes me want to (in a friendly manner!) grab the author and tell them to write in their own voice, damn it.

tom_ 13 hours ago | parent | prev | next [-]

I just give up the moment I notice it. I gave up on this one once I got to "The High Fidelity Trap". My LLMdar said: brrrrrp. (Imagine the sound of a sad trombone, only out of tune.) If I feel like the author couldn't be bothered to write it, I feel like I can't be bothered to read it.

And if I'm wrong: so be it. I'm comfortable living dangerously.

(Reading it again, I probably should have noticed by "But here’s the thing: AI-generated UIs are high-fidelity by default", a couple of sentences previously. And in fact, there's "Deliberately sketchy. Intentionally low-fidelity. The comic-sans-looking wireframes were a feature, not a bug" in the very first paragraph - god, I'm so stupid! Still, each time I get this wrong, I'm that bit more likely to spot it in future.)

Tenobrus 9 hours ago | parent | prev | next [-]

Some AI detectors work now. Pangram detects this as 57% AI-written, and the parts it thinks are human are... the ASCII diagrams/screenshots. All the actual text it flags as generated.

muzani 14 hours ago | parent | prev | next [-]

I stopped reading it at that point. I'm not against AI-written articles; I even think it's a little rude to accuse someone of using AI. But I agree.

I think we do develop "antibodies" against this kind of thing, like listicles, clickbait, and random links that rickroll you. It's the same reason the article isn't titled, "5 examples of ASCII-Driven Development. You'll never guess #2!"

Every article is a little mentor, and the thing with mentors and teachers is that you have to trust them blindly, suspend disbelief, and so on. But the AI voice triggers the part of the brain designed to spot scams.

iamanllm 19 hours ago | parent | prev [-]

"Not as an aesthetic choice. Not as nostalgia. But as a thinking tool" is a perfectly normal sentence, and I think there's an equally bad trend of people assuming things are AI-written while forgetting that AI was trained on human writing. But to your point: agreed, there is a disconnect when things are in fact written by AI. I skimmed the article anyway, so to me it didn't matter lol.

oasisbob 19 hours ago | parent [-]

Those are sentence fragments, not perfectly normal sentences. They're useful in some contexts, but inappropriate for more formal writing.

When LLMs reuse the same pattern dozens of times in a single article, it stops being interesting or surprising and just becomes obnoxious and grating.

iamanllm 14 hours ago | parent [-]

it's not formal writing, it's a blog post.