simonh 5 days ago
LLMs do have to be supervised by humans, and they do not perceive context or correct their own errors; it's not at all clear this is going to change any time soon. In fact, it's plausible that this stems from basic limitations of the current technology. So if you're right, sure, but I'm certainly not taking that as a given.
cubefox 4 days ago
They do already correct errors, ever since OpenAI introduced its o1 model, and the improvements since then have been significant. It seems practically certain that their capabilities will keep growing rapidly. Do you think AI will suddenly stagnate, such that models are not much more capable in five years than they are now? That would be absurd. Look back five years and we were practically in the AI stone age.