afiori 8 hours ago |
Most of the reasoning for the impossibility of intelligence in LLMs either requires very restricted environments (ChatGPT might not be able to tell how many r's are in "strawberry", but it can write a Python script to do so, could call that script if given access to a shell or similar, and can understand the answer) or implicitly assumes that human brains have magic powers beyond Turing completeness. |
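For what it's worth, the script in question is trivial; a minimal sketch of the sort of thing the model could emit and run (the exact code it would produce is hypothetical, but anything along these lines works):

  # Count occurrences of 'r' in "strawberry"
  word = "strawberry"
  count = word.lower().count("r")
  print(f"'{word}' contains {count} occurrence(s) of 'r'")  # prints 3

The point isn't the script itself but that the model can delegate the subtask it is bad at (character-level counting under tokenization) to a tool it is good at producing, and then read the result back.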