▲ | bilbo-b-baggins 6 days ago |
Actually, internally we do know what’s going on these days. Anthropic put out a white paper detailing how Claude can’t do math, but there are enough math examples out there that Claude can fake it. I wish you’d stop making LLMs out to be some kind of magic thing they aren’t.
▲ | hodgehog11 4 days ago | parent |
I work in the theory of deep learning, so I can say with some authority that while we know a good deal, and can probe the internals much better than most of the public realises, when it comes to philosophical questions that compare their nature with humans, we have a long, long way to go. The biggest problem is that we're still trying to work out what it is we even want to measure that would tell us whether we have achieved AGI or not. Linear probes and autoencoders have been useful, but we're quickly reaching the limits of those techniques. And don't even get me started on approaches to theory that operate by cherry-picked examples. Anthropic's contributions have been beneficial to the field, but they are far from conclusive.
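
For anyone unfamiliar with the first technique mentioned: a linear probe is just a small classifier fit on a model's frozen internal activations to check whether some property is linearly decodable from them. A toy sketch below, with made-up stand-in data in place of real model activations (the array shapes and the property being probed are purely illustrative):

    # Minimal linear-probe sketch. "activations" stands in for hidden states
    # extracted from one layer of a model (in practice via forward hooks);
    # here it is random data so the example runs on its own.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    activations = rng.normal(size=(1000, 768))          # (n_examples, hidden_dim)
    labels = (activations[:, :10].sum(axis=1) > 0).astype(int)  # toy target property

    X_train, X_test, y_train, y_test = train_test_split(
        activations, labels, test_size=0.2, random_state=0
    )

    # The probe is just a linear classifier trained on frozen activations.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out probe accuracy:", probe.score(X_test, y_test))

High probe accuracy only shows the property is linearly readable from that layer; it doesn't show the model actually uses that information, which is part of why these techniques run into limits.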