0xbadcafebee 2 days ago

People complain a lot about LLM-written articles, but the human comments here on HN are far worse. Mostly a bunch of people extremely proud of themselves for not reading an LLM-written article, then a bunch of people who take it at face value and make the model seem almost useful, and one comment that actually looked at other benchmarks. Good ol' humanity: good at being emotional... not so good at analysis.

The article makes some good points about model design (how different-size models within a family can get similar results, how to filter out hallucination, reinforcement of math results), so that's worth understanding. It's analyzing a paper, which only discussed three sizes of the same model family. But what the article doesn't say is that, compared to other model families, Granite 4.1 8B sucks. The only benchmarks it does well on compared to other models are non-hallucination and instruction following. Qwen 3.5 4B (among other models) easily outclasses it on every other metric.

This article teaches a valuable lesson about reading articles in general. You can take useful information away from them (yes, even when they're written by an LLM). But you should also apply critical thinking and proactively check whether the article missed anything you might find relevant.

sureMan6 2 days ago | parent | next [-]

The pro-LLM rant is weird. LLMs "hallucinate" by creating detailed, elaborate lies, and the frontier models still do this egregiously. An LLM-written article has zero value by default, since every single line could be true or could be a convincingly crafted lie, so every line has to be fact-checked.

I'm using Gemini 3.1 Pro to help me research my thesis. Even with search enabled and in pro mode, it still invents entire papers that don't exist and lies about the contents of existing papers to relate them to the context or to appease me. If I submitted an LLM-written article based on the results it's given me, 80% of the article would be lies.

Commenting to complain that an article is LLM-written is helpful too, since some people aren't able to tell the difference.

0xbadcafebee 2 days ago | parent | next [-]

> an LLM written article by default has 0 value since every single line could be true or it could be a convincingly crafted lie, every line has to be fact checked

The exact same thing is true of human speech. You have no idea whether anything a human says is true until you fact-check it. But you don't fact-check everything every person says, do you?

So what do you do instead? You use heuristics: simple - and quite flawed - subconscious rules that let you stop worrying about things. You find a person you like, you classify them as "trustworthy", and you believe almost everything they say without considering whether any of it might be false. But of course humans are fallible: many of them receive "poisoned" input, and they even hallucinate (make up information). Then they spread that false information around. Yes, even the people you trust.

And when you're faced with something untrue, said by someone you trust, you rationalize it. "Oh, they just made a mistake." And you completely ignore that the person you trust told you a falsehood. Life is hard enough without having to question if everything we hear is false. So we just accept falsehoods from some people, and not others.

LLMs are likely more factual and knowledgeable today than humans are, thanks to their constant improvements via reinforcement. They're going to keep getting better too. But they'll never be perfect. Rather than rejecting anything they produce, my suggestion would be to do what you do with humans: trust them a little, verify big things, let the little things go, accept that there will be errors, and move on with life.

WarmWash 2 days ago | parent | prev | next [-]

If you are asking an LLM to cite its sources, you are wasting your time and degrading the quality of the response. LLMs have no inherent mechanism for "knowledge source tracking"; that isn't how they work at all. We're trying to get there with agentic stacks, but it's still too new.

For sparse-knowledge tasks, where you know the model can't possibly have much training data because even humans don't have much knowledge there, use it as a brainstorming partner, not as a source. Or put relevant papers in its context to help you evaluate those papers in relation to your work (a minimal sketch of that approach is below). But it will just hurt itself in confusion trying to tie fuzzy ideas to sparse sources buried in pages upon pages of mildly related Google search results.
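
For what it's worth, "papers in its context" is just prompt construction. A minimal sketch in Python, assuming the OpenAI client library; the model name, file paths, and system instruction are placeholders, not a recommendation:

    # Sketch: evaluate papers by putting their full text in the context
    # window instead of asking the model to recall sources from training.
    from pathlib import Path

    from openai import OpenAI  # assumed client; any chat API works

    client = OpenAI()

    def evaluate_papers(paper_paths, question):
        # Concatenate the actual paper text into the prompt so the model
        # grounds its answer in supplied material, not fuzzy recall.
        papers = "\n\n".join(
            f"--- {p} ---\n{Path(p).read_text()}" for p in paper_paths
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer only from the papers provided. "
                            "If the papers don't say, say so."},
                {"role": "user",
                 "content": f"{papers}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

The design point: every claim in the answer can be checked against text you supplied, which is exactly what you can't do when the model is inventing citations.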

kevin42 2 days ago | parent | prev | next [-]

If they can't distinguish LLM text, then why should they care?

Anti-AI people like to bring up hallucination as if everything AI generates is false.

I can write pages of text, with my own content, and then use AI to improve my writing and clarity. Then I review and edit. It might have some LLM markers in there, which I sometimes remove because they're distracting. The final, AI-assisted writing is easier to read and better organized, but all the ideas are mine. Hallucinations are not remotely a problem in this case.

Forgeties79 2 days ago | parent [-]

If you can't distinguish between fake images and real ones, why should you care?

kevin42 2 days ago | parent [-]

That depends on the purpose of the image.

If it's used to create a false narrative (like a deepfake), sure, you should care. But if it's used as an alternative to a stock photo, or as an easy way to make an infographic, then no, I don't think you should care.

joquarky 2 days ago | parent | next [-]

> you should care

Why should I care? The world is full of false narratives.

How can I have the bandwidth to care about everything all of the time?

I swear that more than half of the complaining I find here comes from privileged people bikeshedding over inane topics, people who have never had to worry about serious survival-level issues (how am I going to eat today?) in their lives.

Forgeties79 2 days ago | parent | prev [-]

And when an LLM starts hallucinating, and I emphasize “when,” is that not the same issue as creating a false narrative?

halJordan 2 days ago | parent | prev [-]

No, you're being weird (and why are you calling people weird anyway? That's not helpful).

You're complaining about facts that have been true since words were first written on paper. If you read the article with the same criticality you'd apply to any other article, you won't have the problem you're complaining about.

The reality is, you're only complaining because you hate AI. Cool, but don't dress it up, and don't resort to name-calling to browbeat the other guy.

lelanthran 2 days ago | parent [-]

If I read something and cannot tell that it is AI-generated, then there's no problem.

If it has AI tells, then I won't bother to continue reading, because it was either written by an AI or written by someone who can't tell the difference.

Either way it's probably a poor piece of writing.

phkahler 2 days ago | parent | prev | next [-]

>> The only benchmark it does well at compared to other models is non-hallucination and instruction following.

I think instruction following is going to be the most useful thing these models do. Add a voice interface and access to a bunch of simple, straightforward devices or APIs, and you have a mildly useful assistant (the dispatch loop is sketched below). If that can be done in 8B parameters, it will soon run on edge devices. That's solid usefulness.
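
The dispatch loop for that is tiny. A minimal sketch, assuming the model reliably emits JSON; run_model and the device functions are hypothetical stand-ins, not any real device API:

    # Sketch: instruction-following model mapped onto simple device actions.
    import json

    def lights_on(room):
        print(f"(lights on in {room})")

    def set_thermostat(temp_c):
        print(f"(thermostat set to {temp_c} C)")

    DEVICES = {"lights_on": lights_on, "set_thermostat": set_thermostat}

    PROMPT = (
        'Reply with exactly one JSON object: {"action": name, "args": {...}}. '
        f"Known actions: {sorted(DEVICES)}. Request: "
    )

    def handle(utterance, run_model):
        # run_model is any callable that sends text to a local model and
        # returns its reply; instruction following is what makes the
        # json.loads() below safe to rely on.
        cmd = json.loads(run_model(PROMPT + utterance))
        DEVICES[cmd["action"]](**cmd["args"])

All the hard parts (speech-to-text, the model runtime) sit outside that loop, which is why a well-behaved 8B model is enough.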

encrux 2 days ago | parent [-]

Anything that beats Alexa-level intelligence on an edge device is what I'd call useful as well, and that shouldn't be too hard.

It's mind-boggling how bad current voice assistants can be when you ask them fairly easy questions.

haolez 2 days ago | parent | prev | next [-]

The problem is the signal-to-noise ratio in these articles. If an AI wrote the article, then the same info could have been generated by my own AI, but tailored to my needs. So what, exactly, is the new information in this article that I can take to my own AI? That's what I want to get out of this interaction.

Maybe my point is something along the lines of "Just send me the prompt" [0].

[0] https://blog.gpkb.org/posts/just-send-me-the-prompt/

danielbln 2 days ago | parent [-]

The prompt, plus all the other bits of information the context was seeded with before the output was created (documents, web searches, other sources) - in which case it might be more efficient to just consume the final deliverable (yourself or via an LLM).

haolez 18 hours ago | parent [-]

Fair point. We could classify AI-generated articles into two categories:

1) articles generated with context data that's trivial to find (or even embedded into the model)

2) articles generated with context data that's hard to find or not publicly available

simonw 2 days ago | parent | prev | next [-]

"The article makes some good points about model design"

But how can I tell if those are good points or not?

I don't want to invest time in reading something if the presence of those "good points" depends on a roll of the dice.

steveharing1 2 days ago | parent [-]

Even calling it a roll of the dice is an assumption. Can you point to anything you found that's actually a mistake?

lelanthran 2 days ago | parent | next [-]

You expect people to read every single excretion, which can be generated faster than anyone can read, just to find the rare gem that might exist?

The problem is that in the past it took many times more effort and hours to write something than it took to read it. That served two purposes:

1. Lazy people just looking for an audience were effectively gatekept from drowning the world with their every vapid thought.

2. Because supply was many times slower than consumption, it was viable to give most articles a chance: an author could not drown me in a deluge even if they wanted to.

Having the criterion now that the author should spend at least as much effort creating a piece as they expect the reader to spend reading it is a damn useful bar: instead of reading 1,000 AI articles just to find the one good one, I can simply read 10 human-authored articles and be confident that 9 of them have something worthwhile.

simonw 2 days ago | parent | prev [-]

No, because I'm not going to spend a bunch of my time fact-checking obvious AI slop.

joquarky 2 days ago | parent [-]

Then don't complain.

simonw 2 days ago | parent [-]

?

lelanthran 2 days ago | parent | prev | next [-]

> People complain a lot about LLM-written articles, but the human comments here on HN are far worse.

No, they aren't.

You are comparing writing produced with little to no effort to writing produced with the minimal effort required to communicate.

It's reasonable for people to complain when they're presented with material that not even the author thought was worth the effort.

geraneum 2 days ago | parent | prev | next [-]

> the human comments here on HN are far worse

I already assume some comments here are LLM written.

mkovach 2 days ago | parent | next [-]

I just wait until I'm hallucinating, then I comment. Keeps the classifiers honest.

elxr 2 days ago | parent | prev [-]

I mean, obviously.

I assume some people here have never programmed a single useful thing even once in their lives.

drob518 2 days ago | parent | prev | next [-]

> But what the article doesn't say is, compared to other model families, Granite 4.1 8B sucks.

Right. This just says that Granite 4.1 8B is better than a previous version, Granite 4.0-H-Small, which has 32B parameters (9B active).

So they made a less bad model than before. But that doesn't tell you anything about how it compares with other model families.

DetroitThrow 2 days ago | parent | prev | next [-]

>Mostly a bunch of people extremely proud of themselves for not reading an LLM-written article

I'm not sure it's pride so much as people voicing displeasure at the uncertainty about what went into the LLM prompt. This may have been a one-sentence prompt, or it may have been well-researched background that the model simply reformatted. Why waste minutes or hours verifying it when someone may have spent ten seconds on it? It's very easy to see their point.

Lately, people seem to suggest that anyone they disagree with voicing an opinion about anything is engaging in some kind of auto-fellatio. I wonder what causes them to think this way.

whalesalad 2 days ago | parent | prev | next [-]

The thing is, it's just a bunch of other original content that has been chewed up and regurgitated into something "new". Just show us the original content instead. This is, by definition, slop. https://huggingface.co/blog/ibm-granite/granite-4-1
