simianwords 3 hours ago
Haha ok. So still no example? The GPT shared link shows a "thought for" indicator, which means the latest thinking model was used. You may try that. What you can do is this: submit a prompt that clearly makes GPT hallucinate. You could secretly use a worse model, or a system prompt that deliberately gives wrong answers, but I'm going to assume you won't go that far. We can leave it to the public to decide whether this is a legitimate counterexample and whether it can really be reproduced. Shall we try that? I'm guessing you won't, but worth a shot!
simoncion 2 hours ago | parent
You weren't paying much attention to the "Consider:" part of my previous comment. You don't believe that a well-paid, very careful, high-integrity member of the computer safety community has, on multiple occasions, encountered actual, sustained bullshitting from the latest available for-pay version of ChatGPT. You don't accept either this fellow's reports or my informed assessment of his computing situation as truthful and accurate. On top of that, your goalpost-shifting and general demeanor throughout this conversation simply don't give me the impression that you've much integrity. I'm not spending the equivalent of ten to twenty six-packs to reproduce aphyr's work only to have you, given the evidence I have before me, reject that as well. 200 USD is a lot of money to throw away to "win" an Internet argument with a stranger who refuses to accept evidence presented by someone known to be careful, scrupulous, and honest.