tombert | 4 hours ago
> What does AGI look like in your opinion?

Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning about novel things that aren't just regurgitated from GitHub would be cool.

Being able to learn new things would be another. LLMs don't learn; they're pretrained models (it's in the name: GPT) that take inputs and produce outputs. RAG is cool, but it isn't really "learning"; it just feeds in a bit more context to give a facsimile of learning.

Taking what you're saying to the extreme, `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to find my errors or understand a problem.

I think LLMs are very neat, but ultimately they're pretty straightforward input-output functions.
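To make the RAG point concrete: a minimal sketch of the mechanism, where retrieval is just string matching and "augmentation" is just prepending the retrieved text to the prompt. The model's weights never change. All names here are hypothetical, and the retriever is a toy; real systems use embeddings, but the shape of the trick is the same.

```python
# Toy RAG: retrieval + prompt concatenation, no learning involved.
# The retriever and prompt format below are hypothetical illustrations.

def retrieve(query: str, documents: list[str]) -> list[str]:
    """Toy retriever: return documents sharing any word with the query."""
    query_words = set(query.lower().split())
    return [d for d in documents if query_words & set(d.lower().split())]

def build_prompt(query: str, documents: list[str]) -> str:
    """'Augment' the prompt by prepending retrieved text as context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "grep searches files for lines matching a pattern",
    "GPT stands for generative pretrained transformer",
]
print(build_prompt("what does grep do?", docs))
```

The model is still the same frozen input-output function; the only thing that changed is the string handed to it.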
adamsb6 | 4 hours ago | parent
Why should the implementation matter at all? You should be able to classify a black box as AGI or not. Well, I suppose you lose the "artificial" if there's a human brain hidden in the box.