CamperBob2 12 hours ago
The Qwen 3.6 27B 8-bit quant has no problem with it. I'd guess that most thinking models won't fail this kind of test anymore, while some base or instruct models that are not post-trained for reasoning will still fail it. I also can't reproduce it in ChatGPT 5.3 Instant with auto-thinking disabled. Solved problem, as far as I'm concerned. Maybe this particular case was a bug in the voice model, or just some BS the YouTuber made up for clicks. (Notice that we never actually see the answer in text form.) Mission accomplished, I guess.
Imustaskforhelp 11 hours ago
For what it's worth, I tried this myself in ChatGPT before uploading it, and it told me there are three e's, which is what made me upload it in the first place; I just had to try it out, so that's my anecdotal evidence. Actually, let me replicate it. Here you go: https://chatgpt.com/share/69f7a27a-2634-83e8-bffa-520bd2ad47...

My point is that these models are still incredibly finicky. I can sometimes get the right answer too, don't get me wrong, but it's fundamentally unpredictable and feels like guesswork at times, just as it did for the original video's author: it first said there are no e's, then 4, then 5, and for me it said 3, though sometimes it would say 4. So calling it a solved problem doesn't seem accurate to me when I can replicate the failure in my own testing, and the first time I tried it in my ChatGPT it also said 3.

Edit: here is another ChatGPT link, separate from the first one I shared, where it says 3 again: https://chatgpt.com/share/69f7a3a6-aa1c-83e8-b622-52cb2a9b10... And I tried yet another time, so here is one more: https://chatgpt.com/share/69f7a3ee-07c8-83e8-ba43-65800d8907... (Note that all the links point to different chats, even though they share a similar prefix.)
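For contrast, the ground truth for this kind of test is trivially deterministic in code, which is what makes the varying model answers (0, 3, 4, 5) stand out. A minimal sketch, using "perseverance" as a stand-in since the actual word from the video isn't visible in the thread:

```python
# Counting a letter's occurrences is deterministic, unlike the
# model answers described above. "perseverance" is a stand-in word,
# not the one from the original video.
word = "perseverance"
count = word.count("e")
print(f"'{word}' contains {count} e's")  # prints: 'perseverance' contains 4 e's
```

The usual explanation for the flakiness is that models see subword tokens rather than individual characters, so character-level questions depend on the model reasoning its way to the answer instead of reading it off directly.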