dingnuts 14 hours ago

honestly I think these things cause a form of Gell-Mann Amnesia: when you use them for something you already know, the errors are obvious, but when you use them for something you don't understand, the output is sufficiently plausible that you can't tell you're being misled.

this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press on other possibilities, or ask different questions, because the answer was already known?

brandall10 12 hours ago | parent | next [-]

The great thing is the models are sufficiently different that when multiple come to the same conclusion, there is a good chance that conclusion is bound by real data.

And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.

gitremote 12 hours ago | parent [-]

All the models are pre-trained on the same one Internet.

brandall10 11 hours ago | parent [-]

"Bound by real data" meaning not hallucinations, which is by far the bigger issue when it comes to "be an expert that does x" that doesn't have a real capability to say "I don't know".

It would be incredibly unlikely for different models to hallucinate the same plausible-sounding but incorrect building codes, medical diagnoses, etc., due to architecture differences, training approaches, and so on.

So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
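The cross-model check described above can be sketched as a simple consensus vote. This is only an illustration of the idea, not anyone's actual tooling: `answers` stands in for responses fetched from different providers, and the light normalization is an assumption about how you'd compare free-text replies.

```python
from collections import Counter

def consensus(answers, threshold=2):
    """Return the answer at least `threshold` models agree on, else None."""
    # Normalize lightly so trivial formatting differences don't break agreement.
    counts = Counter(a.strip().lower() for a in answers)
    answer, n = counts.most_common(1)[0]
    return answer if n >= threshold else None

# Hypothetical replies from three independently trained models:
answers = ["Section 706.2", "section 706.2", "Section 1207.4"]
print(consensus(answers))  # two of three agree -> "section 706.2"
```

In practice the hard part is the normalization step: two models rarely phrase an answer identically, so real comparisons tend to need fuzzy matching or a human eyeballing the outputs, which is roughly what the comment is suggesting anyway.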

rokkamokka 14 hours ago | parent | prev | next [-]

I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.

raw_anon_1111 12 hours ago | parent | prev | next [-]

Usually something as simple as saying “now give me a devil's advocate response” will help, and of course “verify your answer on the internet” will give you real sources that you can verify.

I have very mild cerebral palsy[1], and the doctors were wrong about so many things with my diagnosis back in the mid-to-late 70s when I was born. My mom (a retired math teacher now, with an MBA back then) had to physically go to different out-of-town libraries and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that are almost impossible to find via a web search.

Every web search on CP is inundated with slimy lawyers.

[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.

The doctor said I was developmentally disabled - I graduated at the top of my class (south GA, so take that as you will)

bamboozled 14 hours ago | parent | prev [-]

It’s funny you should say that, because I have been using it in the way you describe. I kind of know it could be wrong, but I’m kind of desperate for info so I consult Claude anyway. After stressing hard I realize it was probably wrong, find someone who actually knows what they’re on about, and course correct.