lisper 6 days ago
No, LLMs are a real breakthrough even if they are not by themselves reliable enough to produce a commercially viable application. Before LLMs, no one knew how to even convincingly fake a natural language interaction. I see LLMs as analogous to Rodney Brooks's subsumption architecture. Subsumption by itself was not enough, but it broke the logjam on the then-dominant planner-centric approach, which was doomed to fail. In that respect, subsumption was the precursor to Waymo, and that took less than 40 years. I was once a skeptic, but I now see a pretty clear path to AGI. It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years.
Retric 6 days ago
> no one knew how to even convincingly fake a natural language interaction

There were some decent attempts at the Turing test, given limited subject matter, long before LLMs. People looking at those conversations were unsure whether one of the parties was a computer. It's really interesting to read some of those transcripts. LLMs actually score worse on some of those tests. Of course they do a huge range of other things, but it's worth understanding both their strengths and their many weaknesses.
kibwen 6 days ago
> It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years

Meanwhile, even after the infamous LK-99 fiasco (which gripped this forum almost more than anywhere else) was exposed as an overblown nothingburger, I still had seemingly intelligent people telling me in all seriousness that the superconductor breakthrough had a 50% chance of happening within the next year. People are absolutely, terminally terrible at estimating the odds of future events that are surrounded by hype.
seanmcdirmid 6 days ago
I thought Waymo was much more ML than rule-based subsumption? I'm not sure it's possible to do more than simple robotics without jumping into ML. I guess you could have high-level rules prioritized via subsumption while the complex sensors and actuators they coordinate are ML-trained.
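The "high-level rules prioritized via subsumption" idea mentioned above can be sketched roughly like this (a toy Python illustration of Brooks-style priority arbitration; the behavior names and sensor fields are hypothetical, and real subsumption runs its layers concurrently as augmented finite-state machines rather than in a simple loop):

```python
# Each layer proposes an action or defers; higher-priority layers
# suppress lower ones. Lower layers could wrap ML-trained components.

def avoid_obstacle(sensors):
    # Highest priority: react if something is too close.
    if sensors["range_m"] < 0.5:
        return "turn_away"
    return None  # defer to lower layers

def wander(sensors):
    # Lowest priority: default behavior, always proposes something.
    return "drive_forward"

LAYERS = [avoid_obstacle, wander]  # ordered highest priority first

def arbitrate(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"range_m": 0.3}))  # obstacle close: avoidance wins
print(arbitrate({"range_m": 2.0}))  # path clear: default applies
```

The key design point is that each layer is competent on its own; adding a higher layer never requires rewriting the ones beneath it.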
| ||||||||||||||||||||||||||
zppln 6 days ago
> clear path to AGI

What are the steps?