camgunz 3 hours ago
I could quibble with some things, but this is right. I don't have a paid account so I can't ping away at 5.4 or whatever, but I do have access to frontier models at work, and they hallucinate regularly. Dunno what to do if you don't believe this; good luck I guess.
simianwords 2 hours ago | parent
I agree that they hallucinate sometimes. I agree they bullshit sometimes. But the extent is way overblown. They basically don't bullshit ever under these constraints:

1. 2-3 pages of text context

2. GPT-5.4 thinking

I don't think the spirit of the original article (not your comments, to be fair) captured this, hence the challenge. I believe we are on the same page here.