ezst 20 hours ago
We have been in a phase of diminishing returns with LLMs for years now. There is no more data to train them on, the hallucinations are baked in at a fundamental level, and they have no ability to emulate "reasoning" beyond what is already in their training data. This is not a matter of opinion.