bilekas 3 hours ago

> Sorry, do people not immediately see that this is an AI bot comment?

How do you know that? Genuine question.

Maxion 3 hours ago | parent | next [-]

To be fair, it is blindingly obvious from the tells. OP also confirms it here: https://news.ycombinator.com/item?id=47045459#47045699

f311a 3 hours ago | parent | prev | next [-]

> isn't just "high demand", but "contractual lock-out."

The "isn't just .., but .." construction is so overused by LLMs.

lsp 3 hours ago | parent | prev | next [-]

The phrasing: "It's not just X, it's Y," and the overuse of "quotes."

dspillett 2 hours ago | parent [-]

The problem with any of these tells is that an individual instance is often taken as proof on its own rather than as an indicator. People do often use “it isn't X, it is Y”-style constructs¹, and many, myself sometimes included, overuse “quotes”², or use em-dashes³, or are overly concerned with avoiding repeated words⁶, and so forth.

LLMs do these things because they are in the training data, which means that people do these things too.

It is sometimes difficult to not sound like an LLM-written or LLM-reworded comment… I've been called a bot a few times despite never using LLMs for writing English⁴.

--------

[1] Particularly vapid space-filler articles/comments, or those using whataboutism-style redirection, which might be a significant chunk of model training data because of how many of them are out there.

[2] I overuse footnotes as well, which is apparently a smell in the output of some generative tools.

[3] A lot of pre-LLM style-checking tools would recommend these in place of hyphens, and some automated reformatters would make the change without asking, so there are going to be many examples in the training data.

[4] I think there is one at work in VS, which I use in DayJob: it suggests code-completion options to save typing (literally Glorified Predictive Text), and I sometimes accept its suggestions. Also, some of the tools I use to check my Spanish⁵ may be LLM-based, so I can't claim that I don't use them at all.

[5] I'm just learning, so automatic translators are useful for checking that what I've written isn't gibberish. For anyone else doing the same: make sure you research any suggested changes, preferably using pre-2023 sources, because the output of these tools can be quite wrong, as you can see when translating into a language you are fluent in.

[6] Another common “LLM tell”, because models often have weighting functions specifically designed to penalise token repetition, largely to avoid getting stuck in loops. But many pre-LLM grammar-checking tools will pick people up on repeated word use too, and people tend to fix the direct symptom with a thesaurus rather than improving the sentence structure overall.
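To make footnote 6 concrete: here is a minimal sketch of that kind of repetition penalty applied at sampling time, assuming a toy logits array and the divide/multiply scheme popularised by the CTRL paper (real implementations differ in the details):

    import numpy as np

    def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
        # Tokens that were already generated get pushed down: positive
        # logits are divided by the penalty, negative ones multiplied,
        # so either way the token becomes less likely to repeat.
        out = logits.copy()
        for token_id in set(generated_ids):
            if out[token_id] > 0:
                out[token_id] /= penalty
            else:
                out[token_id] *= penalty
        return out

    # Toy vocabulary of 5 tokens; token 2 was already emitted twice.
    logits = np.array([1.0, 0.5, 2.0, -0.3, 0.1])
    print(apply_repetition_penalty(logits, [2, 2, 4], penalty=1.5))
    # Token 2 drops from 2.0 to ~1.33, token 4 from 0.1 to ~0.067.

A side effect of penalising exact token repeats is that the model reaches for synonyms instead, which is exactly the thesaurus-over-restructuring symptom described above.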

hakanderyal 3 hours ago | parent | prev [-]

It has Claude all over it. When you spend enough time with them it becomes obvious.

In this case, the “it’s not x, it’s y” pattern and its placement are a dead giveaway.

bayindirh 3 hours ago | parent | next [-]

Isn't it ironic to use AI to formulate a comment against AI vendors and hyperscalers?

It's not ironic, but bitterly funny, if you ask me.

Note: I'm not an AI, I'm an actual human without a Claude account.

phatfish 2 hours ago | parent | next [-]

I wonder what the ratio of "constructive" AI use is versus people writing pointless internet comments.

It seems personal computing is being screwed over so people can create memes, ask questions whose answers take 30 seconds to find on Google or Wikipedia, and sound clever on social media.

bayindirh 2 hours ago | parent [-]

If you think of AI as the whole discipline, there are very useful applications indeed, generally in the pattern-recognition and regulation space. I'm aware of a lot of small projects which rely on AI to monitor ecosystems or other systems, or which use it as a nice regulatory mechanism. The same systems can also be used for genuine security applications (civilian, non-lethal, legal and ethical).

If we are talking about generative AI, again from my experience, things get a bit blurrier. You can use smaller models to dig through data you own.

I have personally used LLMs twice to this day, in each case after very long research sessions that produced no answers. In one case, the model gave me exactly one reference; I followed it and learnt what I was looking for. In the second, it gave me a couple of pointers, which I'm going to follow up myself.

So: generative AI is not that useful for me, it uses way too many resources, and the industry-leading models are, well, unethical to begin with.

nubg 2 hours ago | parent | prev [-]

Yes I found this ironic as well lmao.

I do agree with the sentiment of the AI comment, and was even weighing just letting it slide, because I do fear the future that comment was warning against.

A_D_E_P_T 3 hours ago | parent | prev [-]

> “it’s not x, it’s y”

ChatGPT does this just as much, maybe even more, across every model they've ever released to the public.

How did both Claude and GPT end up with such a similar stylistic quirk?

I'd add that Kimi does it sometimes, but much less frequently. (Kimi, in general, is a better writer with a more neutral voice.) I don't have enough experience with Gemini or Deepseek to say.