BrenBarn 2 days ago

How do you think that LLMs will help that?

mfkhalil 2 days ago

Because LLMs understand language, we can start building algorithms that respond to what users say they want. Instead of reverse-engineering user intent from behavior, you can just tell a system “more of X, less of Y” and it listens. Way more flexible than hard-coded workflows.
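A minimal sketch of what "more of X, less of Y" steering could look like. All names here are hypothetical, and the `score_against` function is a toy keyword check standing in for what would really be an LLM relevance judgment:

```python
# Hypothetical sketch: steer a ranking from a user's stated preferences.
# In a real system, score_against() would be an LLM call rating how well
# an item matches a free-text preference; here it's a toy keyword check.

def score_against(item: str, preference: str) -> float:
    """Toy stand-in for an LLM relevance judgment (0.0 or 1.0)."""
    return 1.0 if preference.lower() in item.lower() else 0.0

def rerank(items: list[str], more_of: list[str], less_of: list[str]) -> list[str]:
    """Boost items matching 'more of' preferences, demote 'less of' ones."""
    def score(item: str) -> float:
        boost = sum(score_against(item, p) for p in more_of)
        penalty = sum(score_against(item, p) for p in less_of)
        return boost - penalty
    # Stable sort: ties keep their original order.
    return sorted(items, key=score, reverse=True)

articles = [
    "Deep dive: database internals",
    "Celebrity gossip roundup",
    "Tutorial: building a search index",
]
print(rerank(articles, more_of=["database", "search"], less_of=["gossip"]))
```

The point is that the user's words drive the ranking directly, with no hard-coded workflow in between.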

BrenBarn 2 days ago

Interesting. That doesn't align with my experience with LLMs. I tend to find "smarter" interfaces (like LLM-based ones) more frustrating because they are black boxes and I find myself struggling to understand how to get what I want from them. I've had a fair number of maddening conversations with LLMs where I ask them for something and they just regurgitate non-answers back over and over.

What I prefer is interfaces that are more systematic and based on comprehensible principles. Like, for search (as someone mentioned in another comment), I want to be able to search for pages (or records, or whatever) that contain the text I searched for. I don't want an interface that tries to understand what I mean, I just want it to use the data I give it in a way that's deterministic enough that I can figure out how to make it do what I want.
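For contrast, the deterministic behavior described here is easy both to implement and to predict. A toy sketch (not any particular search engine):

```python
# Deterministic substring search: every result provably contains the
# query, and the same query always returns the same results in the
# same order. No guessing about what the system "understood".

def search(records: list[str], query: str) -> list[str]:
    q = query.lower()
    return [r for r in records if q in r.lower()]

pages = ["Intro to Python", "Python packaging guide", "Rust ownership"]
print(search(pages, "python"))  # both Python pages, in original order
```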

mfkhalil 2 days ago

I think in a lot of cases that's because the meta with LLMs right now is to "have them do things for you", which generally means that obfuscating what's actually happening behind the scenes can make them seem "smarter" to the average user. Also, engineers are used to having full control over deterministic input-output pipelines, a framework they try to force onto LLM applications, and it fails miserably for the reasons you've listed.

In my opinion, the best applications of LLM UX will have full clarity for the end user (something we're trying to do with MatterRank). The non-determinism should be something the user can control to get better results, not something the engineer has baked into a prompt that takes control away from the user.

Now, if the use case you're looking for is "give me results with x text", then yes I agree with you that LLMs are just getting in the way. But that's not always the case.