ACCount37 | 5 days ago
The primary source is measured LLM performance on once-human-exclusive tasks, such as high-end natural language processing or commonsense reasoning. Those things were once thought to require a human mind - clearly, not anymore. Human commonsense knowledge can be both captured and applied by a learning algorithm trained on nothing but a boatload of text. Another important source is the growing body of mechanistic interpretability research that tries to actually pry the black box open and see what happens on the inside. It has found some amusing artifacts - such as latent world models that can be extracted from the hidden state, or neural circuits corresponding to high-level abstractions being chained together to produce the final outputs. Very similar in function to human "abstract thinking" - despite being implemented on a substrate of floating point math rather than wet meat.
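To make the "latent world model" point concrete: interpretability work in the style of Othello-GPT typically trains a small linear probe on a model's hidden activations and checks whether it can read a world-state feature back out. The sketch below is a minimal, hypothetical version of that setup - the hidden states and labels here are synthetic stand-ins, not activations from a real model.

  # Minimal linear-probe sketch (hypothetical data in place of real LLM activations).
  # If a purely linear classifier can recover a world-state feature from the hidden
  # state with high accuracy, that feature is (roughly linearly) encoded there.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  # Hypothetical setup: 5000 samples of a 512-dim hidden state, each labeled with
  # one bit of world state (e.g. "is board square X occupied?").
  n_samples, d_model = 5000, 512
  true_direction = rng.normal(size=d_model)              # stand-in for a real latent feature
  hidden_states = rng.normal(size=(n_samples, d_model))  # stand-in for model activations
  labels = (hidden_states @ true_direction > 0).astype(int)

  X_train, X_test, y_train, y_test = train_test_split(
      hidden_states, labels, test_size=0.2, random_state=0
  )

  probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
  print("probe accuracy:", probe.score(X_test, y_test))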
freejazz | 5 days ago | parent
I haven't seen LLMs perform common sense reasoning. Feel free to share some links. Your post reads like anthropomorphized nonsense.