bigstrat2003 2 hours ago
That is a silly point. We very clearly are not "a series of weights for probable next tokens", as we can reason based on prior data points. LLMs cannot.
coldtea an hour ago | parent
Unless you're using some mystical conception of "reason", being able to "reason based on prior data points" doesn't translate to "we very clearly are not a series of weights for probable next tokens". And in fact LLMs can very well "reason based on prior data points": that's what a chat session is. It's just that this context is transient, for cost reasons.
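To make the "that's what a chat session is" point concrete, here is a minimal sketch (a toy, not any real provider's API) of how a chat client typically feeds prior data points back to a stateless model: the whole transcript is re-sent each turn, so earlier facts are available, and they vanish as soon as the session list is discarded.

```python
# Toy illustration: a "chat session" is just accumulated context that the
# client re-sends every turn. The model itself keeps no memory between calls.

def build_prompt(history, new_message):
    """Concatenate all prior turns plus the new user message into one prompt."""
    turns = history + [("user", new_message)]
    return "\n".join(f"{role}: {text}" for role, text in turns)

history = [
    ("user", "My name is Ada."),
    ("assistant", "Nice to meet you, Ada."),
]

prompt = build_prompt(history, "What is my name?")
# The prior data point ("Ada") reaches the model only because it is
# re-sent inside the prompt; drop `history` and the session "forgets".
print(prompt)
```

The transience is exactly the cost issue mentioned above: carrying the full history means every turn pays to reprocess all earlier turns, so sessions are bounded and then thrown away.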