simianwords 2 hours ago
If it bullshits so much, you wouldn't have a problem giving me an example of it bullshitting on ChatGPT (paid version)? Let's take any text prompt that fits in a few pages; it may be a question in science, math, or any other domain. Can you get it to bullshit?
dgb23 39 minutes ago
To me it's the other way around: it's difficult to trust (paid) ChatGPT's output consistently. When I need exact, especially up-to-date, facts, I have to constantly double-check everything. I split my sessions into projects by topic, yet it regularly mixes things up in subtle and not-so-subtle ways. There seems to be no actual understanding of continuity, and especially not of causality. It's _very_ easy to lead it astray and have it confidently echo false assumptions. In any case, I've become more precise at prompting and better at spotting when it fails. I think the trick is to not take its output too seriously.
beders an hour ago
I think you highlight one of the problems for users of LLMs: you can't tell anymore whether it's BS or not. I caught Claude the other day hallucinating code that was not only wrong, but dangerously wrong, causing tasks to fail and never recover. And it certainly wasn't obvious.
simoncion an hour ago
> If it bullshits so much, you wouldn't have a problem giving me an example of it bullshitting on ChatGPT (paid version)?

There's an entire paragraph in the essay about aphyr's direct experience with ChatGPT failures and sustained bullshitting of a kind we'd never expect from a moderately skilled human who possesses at least two functioning braincells. That paragraph begins "I have recently argued for forty-five minutes with ChatGPT". Do notice that there are six sentences in the paragraph; I encourage you to read all of them (and make sure to check out the footnote... it's pretty good).

The exact text of the ChatGPT session is irrelevant; even if you reported that you were unable to reproduce the issue, that would only reinforce one of the underlying points, namely that these systems are unreliable.

aphyr has a pretty extensive body of published work, which suggests he'd be unlikely to fabricate a story about an LLM repeatedly failing at a task that any moderately skilled human could accomplish when equipped with the proper tools. So I believe that his report is true and accurate.