stavros 3 hours ago
Saying that the fundamental limitations are things like counting the number of r's in "strawberry" is boring, though. That's how tokens work, and it's trivial to work around. Talking about how they find it hard to say they aren't sure of something is a much more interesting limitation, for example.
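The "trivial workaround" claim can be made concrete: the counting question is only hard for a model that sees subword tokens instead of characters, and it disappears the moment the text is handled character by character. A minimal sketch (plain Python, no model involved):

```python
# Counting letters is trivial once text is treated as characters
# rather than subword tokens, which is why this task says more
# about tokenization than about reasoning.
word = "strawberry"
count = word.count("r")
print(count)  # 3
```

In practice the same idea is what tool-use workarounds do: instead of asking the model to count inside its token view, have it call out to code that operates on raw characters.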
datsci_est_2015 2 hours ago | parent
> Talking about how they find it hard to say they aren't sure of something is a much more interesting limitation, for example.

Sure, thank you for steelmanning my argument. I didn't think I needed to spell out all of the fundamental limitations of LLMs in this specific thread. They are discussed at length across the web, but are often met with pushback, which was my entire point.

Here's another one: LLMs do not have a memory property. Shut off the power and turn it back on and you lose all context. Any "memory" feature implemented by companies that sell LLM wrappers is a hack on top of how LLMs work, like seeding a context window before letting the user interact with the LLM.
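The context-seeding hack described above can be sketched in a few lines. This is an illustrative mock, not any vendor's actual API: `saved_memories` and `build_context` are hypothetical names, and the message-dict shape is just the common chat-completion convention.

```python
# Hypothetical sketch of the "memory" hack: persisted notes are
# prepended to the context window before each session. The model
# itself retains nothing between sessions; only this stored list
# survives a restart.
saved_memories = ["User's name is Alice", "Prefers metric units"]

def build_context(user_message: str) -> list[dict]:
    # Seed the conversation with stored "memories" as a system prompt.
    system = "Known facts:\n" + "\n".join(f"- {m}" for m in saved_memories)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

context = build_context("What units should I use?")
print(context[0]["content"])
```

The point of the sketch is that "memory" lives entirely outside the model, in whatever store holds `saved_memories`; the LLM just sees a longer prompt.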