keremk 3 days ago:
Actually, this is how LLMs (with reasoning) work as well. There is pre-training, which is analogous to a human brain absorbing as much information as possible. Past some as-yet-unknown threshold of pre-training, the models can start reasoning: they use tools, feed the results back in, and do something that resembles human thinking and reasoning. So if we don't pre-train our brains with enough information, we end up with a weak base model. This is of course only an analogy, since we still don't know how our brains really work, but it looks increasingly consistent with this hypothesis.
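
To make the loop I'm describing concrete, here is a minimal sketch in Python. Every name in it (Step, run_tool, generate) is a hypothetical stand-in, not any real model's API; the point is just the shape: a pre-trained model proposes a step, a tool returns feedback, and the feedback is folded back into the context.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Step:
        text: str                 # the model's reasoning text for this step
        tool_call: Optional[str]  # e.g. a search query; None when finished

    def run_tool(query: str) -> str:
        # Stand-in for a real tool (search, calculator, code runner).
        return f"[tool result for {query!r}]"

    def generate(context: str) -> Step:
        # Stand-in for a pre-trained base model producing the next step.
        # A strong base model = better steps; a weak one = weak reasoning.
        return Step(text="...done thinking...", tool_call=None)

    def reasoning_loop(prompt: str, max_steps: int = 5) -> str:
        context = prompt
        for _ in range(max_steps):
            step = generate(context)            # draws on pre-trained knowledge
            if step.tool_call is None:          # no more tool use: answer ready
                return context + step.text
            observation = run_tool(step.tool_call)  # feedback from outside world
            context += step.text + observation      # fold the feedback back in
        return context

The key bit is that the loop only adds feedback on top of whatever the base model already knows, which is why everything hinges on the quality of pre-training.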