PeterStuer | a day ago:
Humans also post-rationalize the things their subconscious "gut feeling" came up with. I have no problem with a system presenting a reasonable argument leading to a production/solution, even if that argument is not materially what happened in the generation process. I'd go even further and posit that requiring the "explanation" to be not just congruent with but identical to the production would lead either to incomprehensible justifications or to severely limited production systems.
pixl97 | a day ago:
Now, at least in a well-disciplined human, we can catch when our gut feeling was wrong: the 'create a reasonable argument' process fails. I wonder how well an LLM can catch that and correct its thinking. I've seen some models figure out they're wrong but then get stuck in a loop. I haven't really used the larger reasoning models much, so I can't speak to their behavior.
eab- | a day ago:
Yep, this post is full of exactly this post-rationalization, for example. It's pretty breathtaking.