qsort | 3 days ago
I agree that saying "they don't think" and leaving it at that isn't particularly useful or insightful; it's like saying "submarines don't swim" and refusing to elaborate further. It can be useful if you extend it to "they don't think like you do". Concepts like finite context windows, the fact that the model is "frozen" and stateless, or the idea that you can transfer conversations between models are trivial if you know a bit about how LLMs work, but extremely baffling otherwise.
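A minimal sketch of what "frozen and stateless" means in practice, assuming a hypothetical complete() function standing in for any chat-style API (not a real library call): the client owns the entire conversation and resends it every time, so "transferring" a conversation to another model is just replaying the same message list.

    # Hypothetical stand-in for a stateless chat API: the model is a
    # pure function of its frozen weights plus the messages it is sent;
    # nothing persists on the model's side between calls.
    def complete(model_name: str, messages: list) -> str:
        return f"[{model_name}'s reply, given {len(messages)} messages]"

    # The client owns all the state: a "conversation" is just a list.
    conversation = [{"role": "user", "content": "What is a context window?"}]
    conversation.append({"role": "assistant",
                         "content": complete("model-a", conversation)})

    # Transferring the conversation to another model is trivial:
    # replay the same message list against a different model.
    conversation.append({"role": "user", "content": "Now summarize that."})
    print(complete("model-b", conversation))

The finite context window falls out of the same picture: the message list can only grow so large before it no longer fits in a single call.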
gowld | 3 days ago
> Concepts like finite context windows

like a human has

> or the fact that the model is "frozen" and stateless

much like a human adult. Models get updated at a slower frequency than humans. AI systems have access to fetch new information and store it for context.

> or the idea that you can transfer conversations between models are trivial

because computers are better-organized than humanity.