Robin_Message a day ago
The weights encode the end goal and so on, but the model has no meaningful access to those weights during chain-of-thought generation. So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.
Zee2 a day ago | parent
I too have no access to the patterns of my neurons' firing - I can only think and observe as a result of them.
senordevnyc a day ago | parent
> So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.

My understanding is that we can't either. We essentially make up post-hoc stories to explain our thoughts and decisions.