lisper 6 days ago

I'm sympathetic to this argument, but nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible. I myself left my career as an AI researcher about 20 years ago because I was convinced the field was moribund and there would be no major breakthroughs in my lifetime. That was about as well-informed a prediction as you could hope to find at the time and it was obviously very wrong. It is in the nature of breakthroughs that they are rare and unpredictable. Nothing you say is wrong. I would bet against QC in 5 years (and even then I would not stake my life savings), but not in 75.

lqstuart 6 days ago | parent | next [-]

In fairness, the biggest breakthrough in AI has been calling more and more things “AI.” Before LLMs it was content-based collaborative filtering.

lisper 6 days ago | parent [-]

No, LLMs are a real breakthrough even if they are not by themselves reliable enough to produce a commercially viable application. Before LLMs, no one knew how to even convincingly fake a natural language interaction. I see LLMs as analogous to Rodney Brooks's subsumption architecture. Subsumption by itself was not enough, but it broke the logjam on the then-dominant planner-centric approach, which was doomed to fail. In that respect, subsumption was the precursor to Waymo, and that took less than 40 years. I was once a skeptic, but I now see a pretty clear path to AGI. It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years.

Retric 6 days ago | parent | next [-]

> no one knew how to even convincingly fake a natural language interaction.

There were some decent attempts at the Turing test, given limited subject matter, long before LLMs. As in, people looking at the conversations were unsure whether one of the parties was a computer. It’s really interesting to read some of those transcripts.

LLMs actually score worse on some of those tests. Of course they do a huge range of other things, but it’s worth understanding both their strengths and their many weaknesses.

kibwen 6 days ago | parent | prev | next [-]

> It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years.

Meanwhile, even after the infamous LK-99 fiasco (which gripped this forum almost more than anywhere else) was exposed as an overblown nothingburger, I still had seemingly-intelligent people telling me with all seriousness that the superconductor breakthrough had a 50% chance of happening within the next year. People are absolutely, terminally terrible at estimating the odds of future events that are surrounded by hype.

seanmcdirmid 6 days ago | parent | prev | next [-]

I thought Waymo was much more ML than logical, rules-based subsumption? I’m not sure it’s possible to do more than simple robotics without jumping into ML. I guess maybe you could have high-level rules prioritized via subsumption but manipulating complex ML-trained sensors and actuators.

lisper 6 days ago | parent [-]

Yes, that's right. The ostensible idea behind subsumption is dead (because it was wrong). But what subsumption did was open up the possibility of putting the AI into the run-time feedback loop rather than into deliberative planning, and that is what all robotic control architectures do today.

zppln 5 days ago | parent | prev [-]

> clear path to AGI

What are the steps?

lisper 5 days ago | parent [-]

It's not really about "steps", it's about getting the architecture right. LLMs by themselves are missing two crucial ingredients: embodiment and feedback. The reason they hallucinate is that they have no idea what the words they are saying mean. They are like children mimicking other people. They need to be able to associate the words with some kind of external reality. This could be either the real world, or a virtual world, but they need something that establishes an objective reality. And then they need to be able to interact with that world, poke at it and see what it does and how it behaves, and get feedback regarding whether their actions were appropriate or not.

If I were doing this work, I'd look at a rich virtual environment like Minecraft or SimCity or something like that. But it could also be Coq or a code development environment.
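To make the loop concrete, here's a toy sketch in Python. Everything in it is a hypothetical stand-in (query_llm, the grid world), not any real API: the model proposes an action, the world pushes back, and the outcome gets folded into the next prompt as a grounding signal.

    # Toy sketch of an embodied feedback loop: the model proposes an
    # action, the world responds, and the outcome becomes new context.
    # query_llm is a hypothetical stand-in for any text-in/text-out model.

    import random

    ACTIONS = ["north", "south", "east", "west"]

    def query_llm(prompt: str) -> str:
        """Stand-in for a real model call; here it just guesses."""
        return random.choice(ACTIONS)

    class GridWorld:
        """A 5x5 grid with a goal square: the 'objective reality'."""
        def __init__(self):
            self.pos, self.goal = (0, 0), (4, 4)

        def step(self, action: str) -> str:
            dx, dy = {"north": (0, 1), "south": (0, -1),
                      "east": (1, 0), "west": (-1, 0)}.get(action, (0, 0))
            x, y = self.pos
            self.pos = (min(max(x + dx, 0), 4), min(max(y + dy, 0), 4))
            return "goal reached" if self.pos == self.goal else f"now at {self.pos}"

    world = GridWorld()
    history = []
    for _ in range(20):
        prompt = f"position {world.pos}, goal {world.goal}, history {history[-3:]}"
        action = query_llm(prompt)
        outcome = world.step(action)       # the world pushes back: feedback
        history.append((action, outcome))  # grounding signal for the model
        if outcome == "goal reached":
            break

Swap the grid for Minecraft, a proof assistant, or a compiler, and the shape of the loop stays the same: act, observe, and learn whether the action was appropriate.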

bryanrasmussen 5 days ago | parent [-]

If they were able to associate words with some sort of external reality, would that prevent hallucination, or just being wrong? Humans hallucinate and humans are wrong; perhaps having intelligence without these qualities is the impossibility.

lisper 5 days ago | parent [-]

It's certainly possible that computers will suffer from all the same foibles that humans do, but we have a lot of evolutionary baggage that computers don't, so I don't see any fundamental reason why AGIs could not transcend those limitations. The only way to know is to do the experiment.

kibwen 6 days ago | parent | prev [-]

> nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible

Indeed, and at the same time breakthroughs are vastly outnumbered by ideas which had plausible-sounding counterarguments that turned out to be correct. Which is to say, the burden of proof is on the people claiming that something implausible-sounding is plausible.

lisper 6 days ago | parent [-]

But QC is quite plausible. There is no theoretical constraint that makes it impossible. It really is just an engineering problem at this point.

kibwen 4 days ago | parent [-]

But the distinction that we're trying to make here is that people hear "plausible in theory" and think "plausible in practice within the timespan of human civilization", which does not follow.

I'm not trying to say anything about whether or not a CRQC will ever be built. I'm also not trying to say that pursuing PQC in the short term is a bad idea. But what I am saying is that the burden of proof remains on the believers to show that the engineering challenges are more than theoretically surmountable.

lisper 4 days ago | parent [-]

Yes, of course that is true. When I said that QC is "just an engineering problem" I did not mean to imply that it was straightforward. It's not. It's a Really Really Hard engineering problem with a lot of unknowns. It might turn out to be like fusion, perpetually 10-20 years away. Or it might turn out to be like the blue LED, seemingly impossible until someone figured out how to do it. I think you'd be foolish to bet your life savings on it either way.