timschmidt | 18 hours ago
None of your examples refute the direct evidence of internal world model building which has been demonstrated (for example: https://adamkarvonen.github.io/machine_learning/2024/01/03/c... ). Instead you have retreated to qualia like "well" and "sucks hard".

> hallucinating

Literally every human memory. They may seem tangible to you, but they're all in your head, the result of neurons behaving in ways which have directly inspired ML algorithms for nearly a century. Further, history is rife with examples of humans learning from books and other written words, and also of humans thinking themselves special and unique in ways we are not.

> When using Claude Code or codex to write Swift code, I need to be very careful to provide all the APIs that are relevant in context (or let it web search), or garbage will be the result.

Yep. And humans often need to reference the documentation to get details right as well.
manmal | 15 hours ago | parent
Unfortunately we can’t know at this point whether transformers really understand chess, or are just going off a textual representation of good moves in their training data. They are pretty good players, but far from the quality of specialized chess bots. Can you please explain how we can discern that GPT-2 in this instance really built a model of the board?

Regarding qualia: that’s ok on HN.

Regarding humans: yes, humans also hallucinate. Sounds a bit like whataboutism in this context though.
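For anyone wondering what "built a model of the board" means operationally in work like the linked post: the usual test is a linear probe over the network's hidden activations. If a simple linear classifier can read off each square's contents from the residual stream far better than chance, the board state is at least linearly encoded internally. Below is a minimal sketch of that probing methodology, with random placeholder arrays standing in for the real activations; the shapes and the 3-way square labels here are illustrative assumptions, not details taken from the post.

```python
# Minimal sketch of a linear probe for board-state information.
# Placeholder data only: in the real experiments, `activations` would be
# hidden states collected from a GPT trained on PGN move strings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend residual-stream activations, one vector per board position:
# shape (n_positions, d_model). Values are random here, hence ~chance results.
n_positions, d_model = 5000, 256
activations = rng.normal(size=(n_positions, d_model))

# Ground-truth contents of one particular square for each position,
# e.g. 0 = empty, 1 = white piece, 2 = black piece (a richer labeling
# would distinguish piece types; this is a simplified assumption).
square_labels = rng.integers(0, 3, size=n_positions)

X_train, X_test, y_train, y_test = train_test_split(
    activations, square_labels, test_size=0.2, random_state=0
)

# The probe is just a linear classifier per square: if it predicts the
# square's contents from the hidden state far above chance, board state
# is linearly decodable from the activations.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")  # ~0.33 on random data
```

On random data like this the probe stays near chance; the claim in the linked post is that probes trained on the actual chess-GPT activations recover the board far more accurately, which is the evidence cited for an internal world model. Whether "linearly decodable" amounts to "really understands chess" is exactly the point under dispute here.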