snaking0776 5 hours ago

As someone actively researching in the neuroscience field, I find these ideas increasingly questionable. These models do a decent job of predicting neural data, depending on your definition and if you compare them to hand-built feature sets, but we're not even sure that will stay true. In vision especially, we already know that as models have scaled up they diverge more from humans and use quite different strategies. If you want them to act like humans or better reflect neural data, you have to actively shape the training process to make that happen.

We currently know less about the language side of things; that part of the field hasn't really figured out what it's looking at yet, because we generally know less about language in the brain than about vision. I think most vision scientists are on board with the idea that these models have been diverging and have to be coerced to be useful. For language it's more up in the air, but there's a growing wave of recent papers that call the human-LLM alignment idea into question.

Personally, I think the platonic representation idea is just a function of the convergence of training methods, data, and architectures all of these different labs are using. If you look at biological brains across species, and even individuals within a species, you see such an incredible variety of strategies and representations that it seems ridiculous to me to suggest there's some base way to represent reality shared across everyone and every species.

Here are some articles that may be of interest if you're curious:

[1] https://arxiv.org/pdf/2211.04533
[2] https://www.nature.com/articles/s41586-025-09631-6
[3] https://www.biorxiv.org/content/10.1101/2025.03.09.642245v1