levitate 3 days ago:
"AGI needs to update beliefs when contradicted by new evidence" is a great idea, however, the article's approach of building better memory databases (basically fancier RAG) doesn't seem enable this. Beliefs and facts are built into LLMs at a very low layer during training. I wonder how they think they can force an LLM to pull from the memory bank instead of the training data. | ||
jibal 3 days ago:
LLMs are not the proposed solution. (Also, LLMs don't have beliefs or other mental states. As for facts, it's trivially easy to get an LLM to say that it was previously wrong ... but multiple contradictory claims cannot all be facts.)
mdp2021 3 days ago:
> how they think they can force an LLM to pull from the memory bank instead of the training data

You have to implement procedurality first (e.g. counting, after proper instancing of ideas).