ACCount37 · 2 hours ago
The human brain's "pre-training" is evolution cramming an enormous amount of structure into it. It "learns from scratch" the way it does because it doesn't actually learn from scratch. I've seen plenty of wacky test-time training schemes in ML these days, which are probably the closest analogue to how the human brain learns. None are stable enough to go into frontier LLMs, where in-context learning still reigns supreme. In-context learning is a "good enough" continuous learning approximation, it seems.
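For concreteness, a minimal sketch of the distinction being drawn (PyTorch; the model, loss, and step count are all placeholder assumptions, not any particular published method): test-time training actually updates the weights on a self-supervised objective computed from the test input itself, whereas in-context learning leaves the weights frozen and only conditions on examples in the prompt.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained network (hypothetical toy model).
model = nn.Linear(16, 16)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(4, 16)  # the test-time input

# Test-time training: a few gradient steps at inference on a
# self-supervised loss (here, reconstruction) before predicting.
for _ in range(5):
    loss = nn.functional.mse_loss(model(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

prediction = model(x)  # predict with the adapted weights

# In-context learning, by contrast, would keep the weights fixed
# and rely entirely on what's in the forward pass / prompt.
```

The instability the comment refers to is the inner loop above: letting gradients touch the weights at inference time risks drift, which is one reason frozen-weight in-context learning remains the default in deployed LLMs.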
bigyabai · 2 hours ago (reply)
> In-context learning is a "good enough" continuous learning approximation, it seems.

"It seems" is putting in a herculean effort holding that argument up. Say, how many "R"s are in "strawberry"?
| ||||||||||||||||||||||||||