arduanika 6 days ago
What hinting? The comment was very clear: arbitrarily good approximation is different from symbolic understanding. "If you can implement it in a brain"? But we didn't. You have no idea how a brain works. Neither does anyone.
mallowdram 6 days ago
We know the healthy brain is unpredictable. We suspect error minimization and prediction are not central tenets. We know the brain creates memory via differences in sharp wave ripples, that it's oscillatory, that it neither uses symbols nor represents, and that words are wholly external to what we call thought. The authors deal with molecules, which are neither arbitrary nor specific. Yet tumors ARE specific, while words are wholly arbitrary. Knowing these things should instill a deep suspicion of ML/LLMs. They have so little to do with how brains work and the units brains actually use (all oscillation is specific, all stats emerge from arbitrary symbols and, worse, metaphors) that mistaking LLMs for reasoning/inference is less lexemic hallucination and more eugenic.
Certhas 6 days ago
We didn't, but somebody did, so it's possible: probabilistic dynamics in high enough dimensions can do it. We don't understand what LLMs are doing. You can't go from understanding what a transformer is to understanding what an LLM does, any more than you can go from understanding what a neuron is to understanding what a brain does.
jjgreen 6 days ago
You can look at it, from the inside.