iLoveOncall | a day ago
It has absolutely nothing to do with reasoning, and I don't understand how anyone could think it's "close enough". Reasoning models are simply answering the same question twice with a different system prompt. It's a normal LLM with an extra technical step. Nothing else.
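The two-pass pattern being described could be sketched like this (a minimal illustration of the claim, not how any vendor actually implements it; `llm` is a placeholder stub standing in for an arbitrary chat-completion call):

```python
# Sketch of the claim above: a "reasoning" model as two plain LLM calls
# with different system prompts. `llm` is a hypothetical stand-in for
# any real model API; it just echoes its inputs for demonstration.

def llm(system: str, user: str) -> str:
    # Placeholder for a real model call.
    return f"[{system}] {user}"

def reason_then_answer(question: str) -> str:
    # Pass 1: same question, "think first" system prompt.
    thoughts = llm("Think step by step before answering.", question)
    # Pass 2: same question again, now conditioned on the pass-1 output.
    return llm("Answer using the reasoning provided.",
               f"{question}\n\nReasoning: {thoughts}")
```

Whether the extra pass counts as "reasoning" or just an extra technical step is exactly the disagreement in this thread.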