zug_zug 19 hours ago

I'm sorry, what even is this? Giving $10k rewards for significant advancements toward "AGI"? What does "making a framework" even mean? It feels like a nothing post. When I think of what real AGI would be, I think:

- Passes the Turing test
- Writes a New York Times Bestseller without revealing it was written by AI
- Writes journal articles that pass peer review
- Wins a Nobel Prize
- Writes a successful comedy routine
- Creates a new invention

And no, nobody is going to make an automated Kaggle benchmark to verify these. Which is fine, because an LLM will never be AGI. An LLM can't even learn mid-conversation.
nomel 6 hours ago

Why does your definition of "AGI" have to exclude nearly all humans? Wouldn't it still be "AGI" if it was as smart as the average human? Since when did AGI stop representing the words that make up the term? Artificial (man-made) General (not specific) Intelligence. Is a human not "GI"!?
stingraycharles 19 hours ago

I get the feeling that the original post was also written using LLMs; it doesn't make a lot of sense. If an LLM like this is really intelligent, at the very least I'd expect it to be able to invent. For example, train an LLM on a dataset only containing knowledge from before nuclear energy was invented, and see if it can invent nuclear energy. But that's the problem: they're not really training the model on intelligence, they're training it on knowledge. So if you strip away the knowledge, you're left with almost nothing.
voxleone 18 hours ago

>> An LLM can't even learn mid-conversation.

There's an implicit assumption that scaling text models alone gets us to human-like intelligence, but that seems unlikely without grounding in multiple sensory domains and a unified world model. What's interesting is that if we do go down that route successfully, we may get systems with something like internal experience or agency. At that point, the ethical frame changes quite a bit.
ixtli 19 hours ago

They're slowly redefining AGI so they can use it for more marketing. If you showed someone from 1960 our LLMs and told them "this is AI", I think they'd be astounded but a little confused, because "artificial intelligence" definitely carried a very clear meaning in literature and media. Now it is marketing terminology, and we're no closer to having a meaningful definition for the word intelligence.
ahoka 19 hours ago

Something I find very interesting about the Turing test: as chatbots improve, humans also get better at recognizing them.
sourcegrift 19 hours ago
Grok recently created a cancer vaccine for a dog that reduced tumor size by 75% | ||||||||||||||