nvr219 14 hours ago
> If you walk a model like ChatGPT through that reasoning, you'll often wind up in a spot where the model readily admits that a clear conclusion is logically entailed but it is absolutely forbidden from uttering it.

Do you have an example of this? I want to try it.