pyman 2 days ago
> something surprising that can happen during the process, which is that, for sufficiently similar models, behaviour can be transferred from student to teacher

By "behaviour" they mean data and pattern matching, right? Alan Turing figured that out in the 1940s.

LLMs aren't black boxes doing voodoo, whatever we like to tell politicians and regulators. They're just software processing massive amounts of data to find patterns and predict what comes next. It looks magical, but it's maths and stats, not magic.

This post is just selling second-hand ideas. And for those of us outside the US who spend all day reading scientific papers: sorry Anthropic, we're not buying it.
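To make the "find patterns and predict what comes next" framing concrete, here is a deliberately crude sketch: a bigram model that predicts the most frequent successor of a token. This illustrates the framing only, not how a transformer actually works; the corpus and function name are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "pattern matching and prediction": count which token follows
# which in a corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Most common continuation seen during "training".
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("the cat" seen twice, "the mat" once)
```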
ben_w 2 days ago | parent | next
> By "behaviour" they mean data and pattern matching, right? Alan Turing figured that out in the 1940s.

That's like saying Da Vinci figured out heavier-than-air flight. A useful foundation, obviously smart and on the right track, but still not enough to get all the credit.

> It looks magical, but it's maths and stats, not magic.

People keep saying "AI isn't magic, it's just maths" as if it were some kind of gotcha. Turning lead into gold isn't the magic of alchemy, it's just nucleosynthesis. Taking a living human's heart out without killing them and replacing it with one taken from a corpse isn't the magic of necromancy, nor a prayer or ritual to Sekhmet; it's just transplant surgery. And so on: https://www.lesswrong.com/posts/hAwvJDRKWFibjxh4e/it-isn-t-m...

Even with full access to the numbers and mechanisms, the inner workings of LLMs are as clear as mud and still full of surprises. Anthropic's work was, to many people, one such surprise.
rcxdude a day ago | parent | prev
I don't think Alan Turing would have predicted the full sentence I wrote there; the first half is not the interesting or surprising part! And of course it's not magic, but mathematics does in fact contain a lot of things we don't yet understand, and for systems like LLMs we don't have particularly robust mathematical frameworks relating their structure to their observed behaviour (compared to other, much simpler, systems).
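If "the process" in the quoted sentence refers to distillation-style training, the mechanism being discussed looks roughly like the sketch below, where one network is fit to another's output distribution rather than to labels. This is an assumed, minimal PyTorch rendering with toy shapes, not Anthropic's actual setup:

```python
import torch
import torch.nn.functional as F

# Minimal distillation step: the student matches the teacher's output
# distribution. Anything encoded in that distribution - not just the
# nominal task - can in principle leak across, which is the kind of
# transfer under discussion.
vocab, dim = 100, 32          # toy sizes, chosen arbitrarily
teacher = torch.nn.Linear(dim, vocab)
student = torch.nn.Linear(dim, vocab)
opt = torch.optim.SGD(student.parameters(), lr=1e-2)

x = torch.randn(8, dim)       # stand-in for a batch of inputs
with torch.no_grad():
    teacher_probs = F.softmax(teacher(x), dim=-1)

# KL(teacher || student), with the student's log-probabilities as input.
loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                teacher_probs, reduction="batchmean")
loss.backward()
opt.step()
```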