squidsoup 9 hours ago:
You can imagine all you want, but my understanding is there is no credible evidence that scaling LLMs will result in true AGI.
mhb 8 hours ago (parent):
Obviously there's no "evidence". Why would you even think we need AGI? But I'm happy to hear your reasoning if you were one of the few (only?) people who imagined that software that could predict the next word could do what it now is doing.