kla-s · a day ago
I'd also add: 5) We need some sense of truth. I'm not sure the current LLM paradigm is robust enough here, given the recent Anthropic paper on data quality, or rather the lack thereof: a small number of bad samples can poison the well, and this doesn't get better with more data. Especially in conjunction with 4), some sense of truth becomes crucial in my eyes. The question, to me, is how this would work. Something verifiable and understandable like Lean would be great (see the sketch below), but how does that carry over to fuzzier topics?
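
To make "verifiable and understandable" concrete, here is a minimal Lean 4 sketch (my illustration, not from the comment or the paper): the proof kernel checks every inference step, so a claim is either proven or rejected outright, and no single bad sample can quietly shift the answer the way it can shift a statistical model.

    -- Minimal Lean 4 illustration of machine-checkable truth:
    -- the kernel verifies every step of this induction, so an
    -- unproven claim cannot sneak past the checker.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]

The catch, as the comment notes, is that this only works for statements that can be formalized in the first place; "fuzzy" empirical claims have no such kernel.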
FloorEgg · 13 hours ago | parent
That's a segue into an important and rich philosophical space... What is truth? Can it be attained, or only approached? Can progress toward truth be made without interacting with reality?

The only shared truth-seeking algorithm I know of is the scientific method, which breaks truth down into two categories (my words here): 1) truth about what happened (controlled, documented experiments), and 2) truth about how reality works (predictive power). Contrast that with something like Karl Friston's free energy principle, which is more of a single-agent truth-seeking (really, predictive-capability-seeking) model.

So it seems truth isn't an input to AI so much as an output, and it can't be attained, only approached. But maybe you don't mean truth so much as the capability to definitively prove, in which case I agree, and I think that's worth adding. Somehow integrating formal theorem-proving algorithms into the architecture would probably be part of what enables AI to dramatically exceed human capabilities.
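
For reference, the quantity Friston's principle minimizes is standardly written as the variational free energy (standard formulation, not something stated in the comment):

    F(q, o) = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(s, o)\right]
            = D_{\mathrm{KL}}\!\left(q(s) \,\|\, p(s \mid o)\right) - \ln p(o)

Minimizing F over the agent's beliefs q(s) simultaneously pulls q toward the true posterior p(s|o) and bounds the "surprise" -ln p(o), which is the precise sense in which it is predictive-capability seeking rather than truth seeking.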