adwn | 6 days ago
I think the discrepancy between different views on the matter mainly stems from the fact that state-of-the-art LLMs are better (sometimes vastly better) at some tasks, and worse (sometimes vastly worse) at others, compared to average humans. For example, they're better at retrieving information from huge amounts of unstructured data. But they're terrible at learning: any "experience" that falls out of the context window is lost forever, and the model can't learn from its mistakes. Actually making it learn something requires a great many examples and a lot of compute, whereas a human can permanently learn from a single example.
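
A toy sketch of the "lost forever" point (purely illustrative, not tied to any real model or API): if you think of the context window as a fixed-size buffer of recent messages, anything pushed out of that buffer is simply unavailable to the model on the next turn unless it gets baked into the weights through further training.

    # Hypothetical illustration: a context window modeled as a bounded buffer.
    # Whatever scrolls out of the buffer leaves no trace for the next turn.
    from collections import deque

    context_window = deque(maxlen=4)  # pretend the model only "sees" 4 messages

    for turn in ["lesson: don't do X", "chat 1", "chat 2", "chat 3", "chat 4"]:
        context_window.append(turn)

    # The early "lesson" has been evicted; the model can repeat the mistake.
    print(list(context_window))                       # ['chat 1', 'chat 2', 'chat 3', 'chat 4']
    print("lesson: don't do X" in context_window)     # False
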
andsoitis | 6 days ago
> a human can permanently learn from a single example

This, to me at least, seems like an important ingredient of a practical definition / implementation of AGI. Another might be curiosity, and perhaps also agency.