kazinator 4 days ago
The problem with your low-effort retort is that, for example, the brain can wield language without having to scan anywhere near hundreds of terabytes of text. People acquire language from vastly fewer examples, and are able to infer and postulate rules, and even articulate those rules. We don't know how. While there may be activity in the brain interpretable as high-dimensional functions mapping inputs to outputs, you are not doing everything with just one fixed function evaluating static weights in a feed-forward network. If it is like neural nets at all, it might be something like numerous models of different types, dynamically evolving and interacting.
Kuinox 3 days ago | parent
The problem with your answer is that your assertions rest on logical fallacies. Neither of us knows how LLMs or brains work to produce their output, so any claim about that without proof is an assertion without basis. For example, from your response:

> the brain can wield language without having to scan anywhere near hundreds of terabytes of text.

The amount of text needed to train an LLM only goes down; even two years ago it was shown that fewer than a few million words suffice to "acquire" English: https://tallinzen.net/media/papers/mueller_linzen_2023_acl.p...