password54321 21 hours ago

People like Eliezer and Nick Bostrom are living proof that if you say enough and sound smart enough, people will listen to you and think you have credibility.

Meanwhile you won't find anyone on here who is an author of "Attention Is All You Need", you know, the paper that is actually the driving force behind LLMs.

gjm11 6 hours ago

The context is that rwaksmunski implied that people have been saying "AGI is 10 years away" for ages, and I was pointing out that the sort of people who say "AGI is X years away" have not in fact been setting X=10 until very recently.

I wasn't claiming that the people on that list are the smartest or best-informed people thinking about artificial intelligence.

But, FWIW: from about 13:20 in https://www.youtube.com/watch?v=_sbFi5gGdRA, Ashish Vaswani (lead author on that paper) is asked what will happen in 3-5 years, and if I'm understanding him right he thinks AI systems might be solving some of the Millennium Prize Problems in mathematics by then; from about 17:10 he's asked how scientists will work ~5 years in the future, and he says AI systems will be apprentices or collaborators. At any rate, he is not saying that human-level AI is unlikely to come in the near future.

From about 1:12:40 in https://www.youtube.com/watch?v=v0gjI__RyCY, Noam Shazeer (second author on that paper), in response to a question about "fast takeoff", says that he does expect a very rapid improvement in AI capabilities. He's not explicit about when he expects that to happen or how far he expects it to go, but my impression from the other parts of that discussion I watched is that he too is not saying that AI systems won't be at or beyond human level in the near future. And from about 49:00 in https://www.youtube.com/watch?v=v0beJQZQIGA, he's asked: if hardware progress stopped, would we still get to AGI? He says he thinks yes, which suggests that he does think AGI is in the foreseeable future, though it doesn't say much about when.

That's all fairly vague, but I very much don't get the impression that either of these people thinks AI systems are just dumb stochastic parrots, or that genuinely human-level AI systems are terribly far off.