▲ Sam Altman now says AGI, or human-level AI, is 'not a super useful term' (cnbc.com)
28 points by EvgeniyZh 3 days ago | 22 comments
▲ baxtr 3 days ago | parent | next [-]
He is smart. He senses that the tide is turning, so he starts changing the messaging. AGI was always just a vehicle to collect more money. AI people will have to find a new way now.
▲ thrown-0825 3 days ago | parent | prev | next [-]
Going with the Musk pattern of over-commit, under-deliver, then gaslight the rubes.
▲ a_bonobo 3 days ago | parent | prev | next [-]
This feels like motte-and-bailey:

> The motte-and-bailey fallacy (named after the motte-and-bailey castle) is a form of argument and an informal fallacy where an arguer conflates two positions that share similarities: one modest and easy to defend (the "motte") and one much more controversial and harder to defend (the "bailey")

The bailey: 'we can build AGI, give us millions of dollars'. The motte: 'I think the point of all of this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things'.
▲ d4rkn0d3z 3 days ago | parent | prev | next [-]
It is remarkable how often this happens. We have a collection of separate but related technologies leading to the conception of a more general technology that does it all. We then proceed to build a towering inferno of complexity that is no doubt more general but less useful in specific instances. At this point, we conclude that what is needed are specialized tools for the separate use cases, so we promptly break up the general technology into many parts. Lather, rinse, repeat. It's like living in an Escher painting.
▲ Overpower0416 3 days ago | parent | prev | next [-]
As always, people like him only say the things that help them reach their current goal. It doesn't matter whether there is any truth to what they say. Moving goalposts, hyperbolic rhetoric, and manipulative marketing that reaches a large audience on an emotional level are the name of the game. Elon Musk made this way of doing business popular, and now every hotshot tech CEO does it. But I guess it works, so people will keep doing it, since there are no repercussions.
▲ jokoon 3 days ago | parent | prev | next [-]
I wish they would use a fraction of that money to propose an interesting definition of intelligence, or to fund research in neurology, cognition, or psychology that could yield insights toward defining intelligence. I wonder how they test their product, but I bet they don't involve scientists from other fields, like psychology or neuroscience.
▲ iamleppert 2 days ago | parent | prev | next [-]
It's not a super profitable term, either. I'm already running Qwen3 Coder locally on my laptop, and I don't need any AI service. Just like that, the financial ambitions of AI have been snuffed out.
▲ RA_Fisher 3 days ago | parent | prev | next [-]
Doesn't the term carry contractual obligations for OpenAI and Microsoft?
▲ itsalotoffun 3 days ago | parent | prev | next [-]
I am shocked, shocked to hear that Sam is backpedalling on this.
▲ wolvesechoes 3 days ago | parent | prev | next [-]
Time for a different marketing strategy.
▲ nsonha 3 days ago | parent | prev | next [-]
Any tech person could have said this from the beginning; only the clueless tech reporters and VCs bought into it, only to feel betrayed now. I can't sympathize with them, sorry.
▲ ildon 3 days ago | parent | prev [-]
I'm a bit surprised by some of the comments I'm reading, which tend to frame Altman's words as nothing more than corporate self-interest. Of course, it's true that in his position he has to speak in ways that align with his company's goals. That's perfectly natural; in fact, it would be odd if he didn't. But that doesn't mean there's no truth in what he says. A company like his doesn't choose its direction on a whim: these decisions are the product of intense internal debate, strategic analysis, and careful weighing of trade-offs. If there's a shift in course, it's unlikely to be just a passing fancy or a PR move detached from reality.

Personally, I've always thought that pursuing AGI as the goal was misguided. Human intelligence is extraordinary, but it is constrained by the physical and biological limitations of the "host machine" (not just the human brain). These are limits we cannot change. Artificial intelligence, on the other hand, has no such inherent ceiling. It can develop far beyond the capabilities of our own minds, and perhaps that's where our focus should be.