UltraSane 6 days ago

"It sounds like you're mostly just talking to yourself"

No, Claude does know a LOT more than I do about most things and does push back on a lot of things. Sometimes I am able to improve my reasoning and other times I realize I was wrong.

Trust me, I am aware of the linear algebra behind the curtain! But even when you mostly understand how they work, the best LLMs today are very impressive. And latent spaces are a fundamentally new way to index data.
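To make that last point concrete, here is a minimal sketch of what indexing in a latent space looks like; the document vectors below are made-up toy values standing in for the output of a real embedding model:

    import numpy as np

    # Toy latent-space index: each document is represented by a unit vector.
    # In practice these vectors come from an embedding model; the values here
    # are invented so the example stays self-contained.
    docs = {
        "how transformers work": np.array([0.9, 0.1, 0.4]),
        "recipe for sourdough":  np.array([0.1, 0.9, 0.1]),
        "linear algebra basics": np.array([0.8, 0.0, 0.6]),
    }
    index = {name: v / np.linalg.norm(v) for name, v in docs.items()}

    # A query is embedded into the same space and matched by cosine similarity,
    # i.e. nearest neighbour in the latent space rather than keyword overlap.
    query = np.array([0.7, 0.05, 0.7])
    query = query / np.linalg.norm(query)

    best = max(index, key=lambda name: float(index[name] @ query))
    print(best)  # -> "linear algebra basics"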

furyofantares 6 days ago | parent | next [-]

You can talk to yourself while reading books and searching the web for information. I don't think the fact that you're learning from information the LLM is pulling in means you're really conversing with it.

I do find LLMs very useful and am extremely impressed by them; I'm not saying you can't learn things this way at all.

But there's nobody else on the line with you. And while they will emit text which contradicts what you say if it's wrong enough, they've been heavily trained to match where you're steering things, even if you're trying to avoid doing any steering.

You can mostly understand how these work and still end up in a feedback loop that you don't realize is a feedback loop. I think this might even be more likely the more the thing has to offer you in terms of learning - the less qualified you are on the subject, the less you can tell when it's subtly yes-and'ing you.

elliotto 6 days ago | parent [-]

I think a conversational interface that responds to natural-language questions is fundamentally different from talking to yourself while reading information sources. I'm not sure it's useful to dismiss the idea that we can talk with a machine.

The current generation of LLMs has had its controversies, but these are still pre-alpha products, and I suspect that in the future we will look back on releasing them unleashed as a mistake. There's no reason the mistakes they make today can't be improved upon.

If your experiences with learning from a machine are similar to mine, then we can both see a whole new world coming that's going to take advantage of this interface.

ceejayoz 6 days ago | parent | prev [-]

> No, Claude does know a LOT more than I do about most things…

Plenty of people can confidently act like they know a lot without really having that knowledge.

UltraSane 6 days ago | parent [-]

So you are denying that LLMs actually contain real knowledge?

habinero 6 days ago | parent [-]

They contain training data and a statistical model that might generate something true or might generate garbage, both with equal confidence. You need to already know the answer to determine which is which.

UltraSane 6 days ago | parent [-]

Have you actually used Claude Opus 4.1? It is right far more than it is wrong.

habinero 5 days ago | parent [-]

How could you know?

UltraSane 5 days ago | parent | next [-]

How do you react to comments like this?

https://news.ycombinator.com/item?id=44980896#44980913

I believe it absolutely should be, and it can even be applied to rare disease diagnosis.

My child was just saved by AI. He suffered from persistent seizures, and after visiting three hospitals, none were able to provide an accurate diagnosis. Only when I uploaded all of his medical records to an AI system did it immediately suggest a high suspicion of MOGAD-FLAMES — a condition with an epidemiology of roughly one in ten million.

Subsequent testing confirmed the diagnosis, and with the right treatment, my child recovered rapidly.

For rare diseases, it is impossible to expect every physician to master all the details. But AI excels at this. I believe this may even be the first domain where both doctors and AI can jointly agree that deployment is ready to begin.

UltraSane 5 days ago | parent | prev [-]

How can you?

habinero 5 days ago | parent [-]

Because I don't rely on a glorified text generator for what's true lol