lobochrome | a day ago |
"LLM just complete your prompt in a way that match their training data" "A LLM is smart enough to understand this" It feels like you're contradicting yourself. Is it _just_ completing your prompt, or is it _smart_ enough? Do we know if conscious thought isn't just predicting the next token? |