wongarsu | 3 hours ago
It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it AGI, as opposed to "regular AI", where you have to assemble a training set for your specific problem and train a model on it before you can get any answers. Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.
sigbottle | 2 hours ago
I'm pretty sure most people take issue with AGI because we've been raised in a culture that taught us AGI would be a super-entity, a complete superset of humans that could never, ever be wrong about anything. In some sense, this isn't really different from where society was headed anyway: the trend was already that more and more sections of the population were being deemed irrational, where you're just stupid/evil for disagreeing with the state. But without AI, that reality was probably still at least a century out. With AI, you have people building that narrative right now. It makes me wonder whether these people really respect humanity at all. Yes, you can prod the slippery slope and go from "superintelligent beings exist" to effective totalitarianism, but you'll find plenty of bad commitments along the way.
NoMoreNicksLeft | an hour ago
No one who read science fiction in 1955 would call any of the models we have now "artificial intelligence". They would be impressed by them, even excited at first that this was it... until they'd had a chance to evaluate them.

Science fiction from that era even had a concept for what these models are: it called them "oracles". I can think of at least three short stories along those lines (though remembering the authors just isn't happening for me at the moment). The concept was a device that could provide correct answers to any question. But these devices had no agency, depended on the question being framed correctly, and were limited in other ways besides. (In one story, I think, the device might chew on a question for years before providing an answer... mirroring that time around 9am PST when Claude has to keep retrying to send your prompt.)

We've always known what we meant by artificial intelligence, at least until a few years ago when we started pretending that we didn't. Perhaps the label was poorly chosen all those decades ago and could be given a better one now (AGI isn't that better label; it's dumber still), but it's what we're stuck with, and we all know what we mean by it. And almost certainly we do not want that artificial intelligence, because most of us are certain it would spell the doom of our species.
Der_Einzige | 2 hours ago
Just don't move the goalposts. AGI was already here the day ChatGPT came out: https://www.noemamag.com/artificial-general-intelligence-is-...