adrian_b 3 hours ago
I agree that after discussions with an LLM you may be led to novel insights. However, such insights are novel not because of the LLM, but because of you. The "novel" insights are either novel only to you, because they concern something you have not studied before, or they are ideas you generated yourself while trying to explain what you want to the LLM. It is very common to arrive at a novel insight about something you believed you already understood well only after trying to explain it to another, ignorant human, at which point you may discover that your supposed understanding was actually incorrect or incomplete.
soulofmischief 2 hours ago | parent
The point is that the combined knowledge/process of the LLM and a user (which could be another LLM!) led it to walk the manifold in a way that produced a novel distribution for a given domain. I talk with LLMs for hours out of the day, every single day. I'm deeply familiar with their strengths and shortcomings on both a technical and an intuitive level. I push them to their limits and have definitely witnessed novel output.

The question remains: just how novel can this output be? Synthesis is a valid way to produce novel data. And beyond that, we are teaching these models general problem-solving skills through RL, and it's not absurd to consider the possibility that a good enough training regimen can impart deduction/induction skills powerful enough to produce novel information even by means other than direct synthesis of existing information, especially when the model is given affordances such as the ability to take notes and browse the web.