simianwords 5 days ago

Disagree. You have to try really hard and go very niche and deep for it to get some fact wrong. In fact I'll ask you to provide examples: use GPT 5 with thinking and search disabled and get it to give you inaccurate facts for non niche, non deep topics.

Non niche meaning: something that is taught at the undergraduate level and relatively popular.

Non deep meaning: you aren't going so deep as to confuse even humans, like solving an extremely hard integral.

Edit: probably a bad idea because this sort of "challenge" works only statistically not anecdotally. Still interesting to find out.

malfist 5 days ago | parent | next [-]

Maybe you should fact check your AI outputs more if you think it only hallucinates in niche topics

simianwords 5 days ago | parent [-]

The accuracy is high enough that I don't have to fact check too often.

platevoltage 4 days ago | parent | next [-]

I totally get that you meant this in a nuanced way, but at face value it sort of reads like...

Joe Rogan has high enough accuracy that I don't have to fact check too often. Newsmax has high enough accuracy that I don't have to fact check too often, etc.

If you accept the output as accurate, why would fact checking even cross your mind?

gspetr 4 days ago | parent | next [-]

Not a fan of that analogy.

There is no expectation (from a reasonable observer's POV) of a podcast host to be an expert at a very broad range of topics from science to business to art.

But there is one from LLMs, even just from the fact that AI companies diligently post various benchmarks including trivia on those topics.

simianwords 4 days ago | parent | prev [-]

Do you question everything your dad says?

platevoltage 4 days ago | parent [-]

If it's about classic American cars, no. Anything else, usually.

collingreen 5 days ago | parent | prev | next [-]

Without some exploratory fact checking, how do you estimate how high the accuracy is and how often you should be fact checking to maintain a good understanding?

simianwords 4 days ago | parent [-]

I did initial tests so that I don't have to do it anymore.

jibal 4 days ago | parent | next [-]

Everyone else has done tests that indicate that you do.

glenstein 4 days ago | parent [-]

And this is why you can't use personal anecdotes to settle questions of software performance.

Comment sections are never good at being accountable for how vibes-driven they are when selecting which anecdotes to prefer.

malfist 4 days ago | parent | prev [-]

If there's one thing that's constant it's that these systems change.

mvdtnz 4 days ago | parent | prev [-]

If you're not fact checking it how could you possibly know that?

JustExAWS 5 days ago | parent | prev [-]

I literally just had ChatGPT create a Python program and it used .ends_with instead of .endswith.

This was with ChatGPT 5.

I mean it got a generic built in function of one of the most popular languages in the world wrong.
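For reference, the built-in really is spelled `str.endswith` (one word, no underscore); `ends_with` doesn't exist on Python strings. A minimal sketch of the correct usage:

```python
# str.endswith checks whether a string ends with a given suffix.
filename = "report.pdf"
print(filename.endswith(".pdf"))            # True

# There is no str.ends_with; calling it raises AttributeError.
print(hasattr(str, "ends_with"))            # False

# endswith also accepts a tuple of suffixes, handy for file filtering.
print(filename.endswith((".txt", ".md")))   # False
```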

simianwords 5 days ago | parent [-]

"but using LLMs for answering factual questions" this was about fact checking. Of course I know LLMs are going to hallucinate in coding sometimes.

JustExAWS 5 days ago | parent [-]

So it isn’t a “fact” that the built in Python function that tests whether a string ends with a substring is “endswith”?

See

https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

If you know that a source isn’t to be believed in an area you know about, why would you trust that source in an area you don’t know about?

Another funny anecdote, ChatGPT just got the Gell-Mann effect wrong.

https://chatgpt.com/share/68a0b7af-5e40-8010-b1e3-ee9ff3c8cb...

simianwords 5 days ago | parent | next [-]

It got it right with thinking which was the challenge I posed. https://chatgpt.com/share/68a0b897-f8dc-800b-8799-9be2a8ad54...

OnlineGladiator 4 days ago | parent [-]

The point you're missing is it's not always right. Cherry-picking examples doesn't really bolster your point.

Obviously it works for you (or at least you think it does), but I can confidently say it's fucking god-awful for me.

glenstein 4 days ago | parent | next [-]

>The point you're missing is it's not always right.

That was never their argument. And it's not cherry picking to argue that there's a definable set of examples where it returns broadly consistent and accurate information, and to invite anyone to test that.

They're making a legitimate point and you're strawmanning it and randomly pointing to your own personal anecdotes, and I don't think you're paying attention to the qualifications they're making about what it's useful for.

simianwords 4 days ago | parent | prev [-]

Am I really the one cherry picking? Please read the thread.

OnlineGladiator 4 days ago | parent [-]

Yes. If someone gives an example of it not working, and you reply "but that example worked for me" then you're cherry picking when it works. Just because it worked for you does not mean it works for other people.

If I ask ChatGPT a question and it gives me a wrong answer, ChatGPT is the fucking problem.

simianwords 4 days ago | parent [-]

The poster didn't use "thinking" model. That was my original challenge!!

Why don't you try the original prompt using thinking model and see if I'm cherry picking?

OnlineGladiator 4 days ago | parent [-]

Every time I use ChatGPT I become incredibly frustrated with how fucking awful it is. I've used it more than enough, time and time again (just try the new model, bro!), to know that I fucking hate it.

If it works for you, cool. I think it's dogshit.

simianwords 4 days ago | parent | next [-]

Share your examples so that it can be useful to everyone

glenstein 4 days ago | parent | prev | next [-]

They just spent like six comments imploring you to understand that they were making a specific point: generally reliable on non-niche topics using thinking mode. And that nuance bounced off of you every single time as you keep repeating it's not perfect, dismiss those qualifications as cherry picking and repeat personal anecdotes.

I'm sorry but this is a lazy and unresponsive string of comments that's degrading the discussion.

OnlineGladiator 4 days ago | parent [-]

The neat thing about HN is we can all talk about stupid shit and disagree about what matters. People keep upvoting me, so I guess my thoughts aren't unpopular and people think it's adding to the discussion.

I agree this is a stupid comment thread, we just disagree about why.

glenstein 3 days ago | parent [-]

Again, they were making a specific argument with specific qualifications and you weren't addressing their point as stated. And your objections such as they are would be accounted for if you were reading carefully. You seem more to be completely missing the point than expressing a disagreement so I don't agree with your premise.

ninetyninenine 4 days ago | parent | prev | next [-]

Objectively he didn't cherry pick. He responded to the person and it got it right when he used the "thinking" model WHICH he did specify in his original comment. Why don't you stick to the topic rather than just declaring it's utter dog shit. Nobody cares about your "opinion" and everyone is trying to converge on a general ground truth no matter how fuzzy it is.

OnlineGladiator 4 days ago | parent [-]

All anybody is doing here is sharing their opinion unless you're quoting benchmarks. My opinion is just as useless as yours; it's just that some find mine more interesting and some find yours more interesting.

How do you expect to find a ground truth from a non-deterministic system using anecdata?

glenstein 3 days ago | parent [-]

This isn't a people having different opinions thing, this is you overlooking specific caveats and talking past comments that you're not understanding. They weren't cherry picking, and they made specific qualifications about the circumstances where it behaves as expected, and your replies keep losing track of those details.

OnlineGladiator 2 days ago | parent [-]

And I think you're completely missing the point. And you say this comment thread is a waste and yet you keep replying. What exactly are you trying to accomplish here? Do you think repeating yourself for a fifth time is going to achieve something?

glenstein 2 days ago | parent [-]

The difference is I can name specific things that you are in fact demonstrably ignoring, and already did name them. You're saying you just have a different opinion, in an attempt to mirror the form of my criticism, but you can't articulate a comparable distinction and you're not engaging with the distinction I'm putting forward.

OnlineGladiator 2 days ago | parent [-]

So your goal here is to say the same thing over and over again and hope I eventually give the affirmation you so desperately need? You've already declared that you're right multiple times. Nobody cares but you.

https://xkcd.com/386/

You might want to develop a sense of humor. You'll enjoy life more.

glenstein a day ago | parent [-]

My goal is to invite you to think critically about the specific caveats in the comment you are replying to instead of ignoring those caveats. They said that generally speaking using thinking mode on non niche topics they can get reliable answers, and invited anyone who disagreed with it to offer examples where it fails to perform as expected, a constructive structure for counter examples in case anyone disagreed.

You basically ignored all of those specifics, and spuriously accused them of cherry picking when they weren't, and now you don't want to take responsibility for your own words and are using this conversation as a workshopping session for character attacks in hopes that you can make the conversation about something else.

OnlineGladiator a day ago | parent [-]

As I've said many times before, I am aware of everything you have said. I just don't care. You seem to be really upset that someone on the internet disagrees with you. And from my perspective, you are the one that has no self-awareness and is completely missing the point. You don't even understand the conversation we're having and yet you're constantly condescending.

I'm sure if you keep repeating yourself though I'll change my mind.

glenstein 9 hours ago | parent [-]

Simianwords said: "use GPT 5 with thinking and search disabled and get it to give you inaccurate facts for non niche, non deep topics" and noted that mistakes were possible, but rare.

JustExAWS replied with an example of getting Python code wrong and suggested it was a counter example. Simianwords correctly noted that their comment originally said thinking mode for factual answers on non-niche topics and posted a link that got the python answer right with thinking enabled.

That's when you entered, suggesting that Simian was "missing" the point that GPT (not distinguishing thinking or regular mode) was "not always right". But they had already acknowledged multiple times that it was not always right. They said the accuracy was "high enough", noted that LLMs get coding wrong, and reiterated that their challenge was specifically about thinking mode.

You, again without acknowledging the criteria they had noted previously, insisted this was cherry picking, missing the point that they were actually being consistent from the beginning, inviting anyone to give an example showing otherwise. At no point between then and here have you demonstrated an awareness of this criteria despite your protestations to the contrary.

Instead of paying attention to any of the details you're insulting me and retreating into irritated resentment.

OnlineGladiator 8 hours ago | parent [-]

Thank you for repeating yourself again. It's really hammering home the point. Please, continue.

4 days ago | parent | prev [-]
[deleted]
cdrini 4 days ago | parent | prev [-]

I sometimes feel like we throw around the word fact too often. If I misspell a wrd, does that mean I have committed a factual inaccuracy? Since the wrd is explicitly spelled a certain way in the dictionary?