rwaksmunski 2 days ago

AGI is still a decade away, and always will be.
gjm11 a day ago
You say that as if people had been saying "10 years away" for ages, but I don't think that's true at all. There's some information about historical predictions at https://www.openphilanthropy.org/research/what-should-we-lea... (written in 2016), from which (I am including the spreadsheet found at footnote 27) these are some I-hope-representative data points, with predictions from actual AI researchers, popularizers, pundits, and SF authors:

1960: Herbert Simon predicts machines can do all (intellectual) work humans can "within 20 years".

1961: Marvin Minsky says "within our lifetimes, machines may surpass us"; he was 33 at the time, suggesting a not-very-confident timescale of, say, 40 years.

1962: I J Good predicts something at or above human level circa 1978.

1963: John McCarthy allegedly hopes for "a fully-intelligent machine" within a decade.

1970: I J Good predicts 1994 +- 10 years.

1972: a survey of 67 computer scientists finds 27% saying <= 20 years, 32% saying 20-50 years, and 42% saying > 50 years.

1977-8: McCarthy says things like "4 to 400 years" and "5 to 500 years".

1988: Hans Moravec predicts human-level intelligence in 40 years.

1993: Vernor Vinge predicts better-than-human intelligence in the range 2005..2030.

1999: Eliezer Yudkowsky predicts an intelligence explosion circa 2020.

2001: Ben Goertzel predicts "during the next 100 years or so".

2001: Arthur C Clarke predicts human-level intelligence circa 2020.

2006: Douglas Hofstadter predicts somewhere around 2100.

2006: Ray Solomonoff predicts within 20 years.

2008: Nick Bostrom says <50% chance by 2033.

2008: Rodney Brooks says no human-level AI by 2030.

2009: Shane Legg says probably between 2018 and 2036.

2011: Rich Sutton estimates somewhere around 2030.

Of these, exactly one suggests a timescale of 10 years, and the same person a little while later expresses huge uncertainty ("4 to 400 years"). The others predict timescales of multiple decades, also generally with low confidence.
Some of those predictions are now known to have been too early. There does seem to be a tendency to say "about 30 years" for exciting technologies whose key details remain un-worked-out: AI, fusion power, quantum computing, etc. But it's definitely not the case that "a decade away" has been a mainstream prediction for a long time. People are in fact adjusting their expectations on the basis of the progress they have observed in recent years.

For most of the time since the idea of AI started being taken seriously, "10 years from now" was an exceptionally optimistic[1] prediction; hardly anyone thought it would be that soon. Now, at least if you listen to AI researchers rather than people pontificating on social media, "10 years from now" is a typical prediction; in fact my impression is that most people who spend time thinking about these things[2] expect genuinely-human-level AI systems sooner than that, though they typically have rather wide confidence intervals.

[1] "Optimistic" in the narrow sense in which expecting more progress is by definition "optimistic". There are many, many ways in which human-level, or better-than-human-level, AI could in fact be a very bad thing, and some of them are worse if it happens sooner, so "optimistic" predictions aren't necessarily optimistic in the usual sense.

[2] Most, not all, of course.