autoexec 2 hours ago
> The strawberry thing has been solved

It seems like every couple of weeks there's some embarrassing failure of AI that gets quickly patched, but just because AI companies scramble to hide the failures of their technology doesn't mean it hasn't failed in ways that shouldn't have been possible if it were what they claim it to be.

> I don't think there exists such a question that can fool an LLM but not fool a reasonably smart person.

An example was on the front page here just a few days ago: https://s3.eu-central-2.wasabisys.com/mastodonworld/media_at...

Until someone invents an LLM with any actual understanding of the words it outputs (which doesn't seem likely to happen in my lifetime), these things are going to keep happening, just as it's impossible to get them to stop hallucinating. The limitation is intrinsic to what they are. We call these chatbots AI, but there is no intelligence there that didn't come from the humans whose words were used to train them.
simianwords 2 hours ago | parent
Did you even care to run it in a reasoning model? It returns the correct result. Every few weeks I see the same thing: someone comes up with an example that trips up ChatGPT.
| |||||||||||||||||