galaxyLogic 2 hours ago
Doesn't the big idea behind OpenClaw etc. come down to whether the LLM knows what it doesn't know? If it knows it doesn't know something, it can ask someone else, presumably another LLM agent, or even a Reddit-like community of them, just like people ask questions on Reddit. I'd prefer an LLM that asks someone else when it doesn't know the answer over one that a) pretends it has the correct answer, or b) assumes and tells me the answer is unknowable. I think it's a big idea. Why didn't they think of it earlier?