rwaksmunski 2 days ago

AGI is still a decade away, and always will be.

gjm11 a day ago | parent [-]

You say that as if people had been saying "10 years away" for ages, but I don't think that's true at all.

There's some information about historical predictions at https://www.openphilanthropy.org/research/what-should-we-lea... (written in 2016). From it (including the spreadsheet linked in footnote 27) here are some hopefully-representative data points, with predictions from actual AI researchers, popularizers, pundits, and SF authors:

1960: Herbert Simon predicts machines can do all (intellectual) work humans can "within 20 years".

1961: Marvin Minsky says "within our lifetimes, machines may surpass us"; he was 33 at the time, suggesting a not-very-confident timescale of say 40 years.

1962: I J Good predicts something at or above human level circa 1978.

1963: John McCarthy allegedly hopes for "a fully-intelligent machine" within a decade.

1970: I J Good predicts 1994 +- 10 years.

1972: a survey of 67 computer scientists found 27% saying <= 20 years, 32% saying 20-50 years, and 42% saying > 50 years.

1977-8: McCarthy says things like "4 to 400 years" and "5 to 500 years".

1988: Hans Moravec predicts human-level intelligence in 40 years.

1993: Vernor Vinge predicts better-than-human intelligence in the range 2005..2030.

1999: Eliezer Yudkowsky predicts intelligence explosion circa 2020.

2001: Ben Goertzel predicts "during the next 100 years or so".

2001: Arthur C Clarke predicts human-level intelligence circa 2020.

2006: Douglas Hofstadter predicts somewhere around 2100.

2006: Ray Solomonoff predicts within 20 years.

2008: Nick Bostrom says <50% chance by 2033.

2008: Rodney Brooks says no human-level AI by 2030.

2009: Shane Legg says probably between 2018 and 2036.

2011: Rich Sutton estimates somewhere around 2030.

Of these, exactly one suggests a timescale of 10 years; the same person a little while later expresses huge uncertainty ("4 to 400 years"). The others are predicting timescales of multiple decades, also generally with low confidence.

Some of those predictions are now known to have been too early. There definitely seems to be a sort of tendency to say things like "about 30 years" for exciting technologies many of whose key details remain un-worked-out: AI, fusion power, quantum computing, etc.

But it's definitely not the case that "a decade away" has been a mainstream prediction for a long time. People are in fact adjusting their expectations on the basis of the progress they observe in recent years. For most of the time since the idea of AI started being taken seriously, "10 years from now" was an exceptionally optimistic[1] prediction; hardly anyone thought it would be that soon. Now, at least if you listen to AI researchers rather than people pontificating on social media, "10 years from now" is a typical prediction; in fact my impression is that most people who spend time thinking about these things[2] expect genuinely-human-level AI systems sooner than that, though they typically have rather wide confidence intervals.

[1] "Optimistic" in the narrow sense in which expecting more progress is by definition "optimistic". There are many many ways in which human-level, or better-than-human-level, AI could in fact be a very bad thing, and some of them are worse if it happens sooner, so "optimistic" predictions aren't necessarily optimistic in the usual sense.

[2] Most, not all, of course.

password54321 21 hours ago | parent [-]

People like Eliezer and Nick Bostrom are living proof that if you say enough things and sound smart enough, people will listen to you and think you have credibility.

Meanwhile you won't find anyone on here who is an author of "Attention Is All You Need". You know, the thing that is actually the driving force behind LLMs.

gjm11 6 hours ago | parent [-]

The context is that rwaksmunski implied that people have been saying "AGI is 10 years away" for ages, and I was pointing out that the sort of people who say "AGI is X years away" have not in fact been setting X=10 until very recently.

I wasn't claiming that the people on that list are the smartest or best-informed people thinking about artificial intelligence.

But, FWIW: from about 13:20 in https://www.youtube.com/watch?v=_sbFi5gGdRA Ashish Vaswani (lead author on that paper) is asked what will happen in 3-5 years, and if I'm understanding him right he thinks AI systems might be solving some of the Millennium Prize Problems in mathematics by then; from about 17:10 he's asked how scientists will work ~5 years in the future, and he says AI systems will be apprentices or collaborators; at any rate, he is certainly not denying that human-level AI is likely to come in the near future. From about 1:12:40 in https://www.youtube.com/watch?v=v0gjI__RyCY Noam Shazeer (second author on that paper), in response to a question about "fast takeoff", says that he does expect a very rapid improvement in AI capabilities; he's not explicit about when he expects that to happen or how far he expects it to go, but my impression from the other bits of that discussion I watched is that he too is not denying that AI systems may be at or beyond human level in the near future. From about 49:00 in https://www.youtube.com/watch?v=v0beJQZQIGA he's asked whether, if hardware progress stopped, we would still get to AGI; he says he thinks yes, which suggests that he does think AGI is in the foreseeable future, though it doesn't say much about when.

That's all fairly vague, but I very much don't get the impression that either of these people thinks that AI systems are just dumb stochastic parrots or that genuinely human-level AI systems are terribly far off.