mfkhalil 2 days ago
Because LLMs understand language, we can start building algorithms that respond to what users say they want. Instead of reverse-engineering user intent from behavior, you can just tell a system “more of X, less of Y” and it listens. Way more flexible than hard-coded workflows. | ||||||||
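A minimal sketch of the contrast being drawn, with hypothetical names: instead of compiling "more of X, less of Y" into hard-coded filters, the user's free-form steering text is passed straight through into a re-ranking prompt for the model to interpret.

```python
def build_rerank_prompt(items, steering):
    # Hypothetical helper: fold the user's free-form steering text
    # directly into a re-ranking prompt, rather than reverse-engineering
    # it into fixed filters first. The LLM call itself is omitted.
    listing = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(items))
    return (
        "Re-rank the items below according to the user's stated "
        f'preference: "{steering}"\n\n'
        f"{listing}\n\n"
        "Return the item numbers, best match first."
    )

prompt = build_rerank_prompt(
    ["long-form essays", "listicles", "video explainers"],
    "more of X, less of Y",
)
```

The flexibility claimed above comes from the fact that `steering` can be any sentence at all; nothing in the pipeline needs to change when the user's vocabulary does.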
BrenBarn 2 days ago | parent
Interesting. That doesn't align with my experience with LLMs. I tend to find "smarter" interfaces (like LLM-based ones) more frustrating because they are black boxes, and I find myself struggling to understand how to get what I want from them. I've had a fair number of maddening conversations with LLMs where I ask them for something and they just regurgitate non-answers back over and over.

What I prefer is interfaces that are more systematic and based on comprehensible principles. Like, for search (as someone mentioned in another comment), I want to be able to search for pages (or records, or whatever) that contain the text I searched for. I don't want an interface that tries to understand what I mean; I just want it to use the data I give it in a way that's deterministic enough that I can figure out how to make it do what I want.
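The deterministic search behavior described above is essentially literal substring matching: the same query always returns the same records, so the rule is learnable. A minimal sketch (case-insensitive matching is an assumption, not part of the comment):

```python
def search(records, query):
    # Comprehensible, deterministic rule: a record matches if and only
    # if it literally contains the query text. No intent-guessing; the
    # same input always yields the same output.
    q = query.lower()
    return [r for r in records if q in r.lower()]

pages = ["Intro to Unix pipes", "LLM prompt tips", "Pipe fitting guide"]
```

Because the matching rule is visible, a user who gets zero results knows exactly what to change: the query text itself.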