PeterStuer 14 hours ago
"Elephants don't play chess" ;) You have a tiny, completely known, deterministic rule based 'world'. 'Reasoning' forwards for that is trivial. Now try your approach for much more 'fuzzy', incomletely and ill defined environments, e.g. natural language production, and watch it go down in flames. Different problems need different solutions. While current frontier llm's show surprising results in emergent shallow and linguistic reasoning, they are far away from deep abstract logical reasoning. A sota theorem prover otoh, can excel at that, but can still struggle to produce a coherent sentence. I think most have always agreed that for certain tasks, an abstraction over which one can 'reason' is required. People differ in opinion over wether this faculty is to be 'crafted' in or wether it is possible to have it emerge implicitly and more robust from observations and interactions. | ||||||||
AnotherGoodName 5 hours ago | parent
What seems bizarre, though, is that the language problem was fully solved first (where "fully solved" means the AI can learn it through pure observation, with no human intervention at all). As in, language today is learnt by basically throwing raw data at an LLM. Board games such as chess still require a human to manually build a world model for the state space search to work on. They are indeed totally different problems, but it's still shocking to me which one was fully solved first.
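(Editor's note: a minimal, hypothetical sketch of what "a human manually builds the world model" means here, not code from either commenter. A tic-tac-toe world model is written out by hand, including its legal moves, win conditions, and turn order, and a plain minimax search then reasons over it. The contrast being drawn is with LLMs, which learn language from raw text without any such hand-built model.)

```python
# Hand-coded "world model" for tic-tac-toe plus a plain minimax search.
# Every rule below (legal moves, win lines, turn order) is supplied by a
# human; the search only works because the model was built for it.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` for X with `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for i in moves:
        child = board[:i] + player + board[i + 1:]
        values.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

if __name__ == '__main__':
    # X to move on an empty board: perfect play on both sides is a draw.
    print(minimax('.' * 9, 'X'))  # -> 0
```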