bayindirh 2 days ago

>> Self-driving cars don't use LLMs, so I don't know how any rational analysis can claim that the analogy is valid.

Doesn't matter, because if we're talking about AI models, no (type of) model reaches 100%, linearly or ever. For example, recognition models run on probabilities, like Tesla's Autopilot (TM), which loves to hit rolled-over vehicles because it hasn't seen enough vehicle underbodies to classify them.

Same for scientific classification models. They emit probabilities, not certain results.
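
To make the probability point concrete, here's a toy sketch (the classes, logits, and cutoff are all made up; the softmax head just stands in for whatever a real perception stack uses):

    import numpy as np

    def softmax(logits):
        # Subtract the max for numerical stability before exponentiating
        exp = np.exp(logits - np.max(logits))
        return exp / exp.sum()

    # Hypothetical classes and raw model scores
    classes = ["car", "truck", "rolled-over vehicle"]
    logits = np.array([2.1, 1.3, 0.2])

    probs = softmax(logits)
    for cls, p in zip(classes, probs):
        print(f"{cls}: {p:.2f}")

    # The model never says "this IS a truck"; downstream code has to
    # pick a cutoff. Below it, the object is effectively not seen,
    # which is the underbody failure mode in a nutshell.
    CONFIDENCE_THRESHOLD = 0.8
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        print("No confident classification -> object may be ignored")

The point isn't this particular code, it's that the output is always a distribution plus a human-chosen cutoff, never a certain answer.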

>> Sure, but the question is not "how long does it take for LLMs to get to 100%"

I never claimed that a model needs to reach a proverbial 100%.

>> The question is, how long does it take for them to become as good as, or better than, humans.

They can be better than humans at certain tasks. In fact, they have been better than humans at some tasks since the '70s, but we like to disregard those systems to romanticize current improvements. Still, I don't believe the current or any generation of AI can be better than humans at anything and everything, all at once.

Remember: No machine can construct something more complex than itself.

>> And that threshold happens way before 100%.

Yes, and I consider that "threshold" to mean "complete", if they can ever reach it, for certain tasks, not for "any" task.
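
As a toy illustration (every number here is invented, not a real benchmark), "complete" would be a per-task comparison against a human baseline, well short of a universal 100%:

    # Hypothetical scores: call a model "complete" for a task once it
    # meets or beats the human baseline on that task, and only that task.
    human_baseline = {"chess": 0.70, "image triage": 0.92, "open-ended reasoning": 0.99}
    model_score    = {"chess": 0.99, "image triage": 0.94, "open-ended reasoning": 0.65}

    for task, baseline in human_baseline.items():
        status = "complete" if model_score[task] >= baseline else "not there yet"
        print(f"{task}: {status}")

Passing the threshold on one line of that table says nothing about the others, which is the whole disagreement.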