roxolotl 3 days ago

> The same patterns emerge for all LLM answers to these questions. They converge on an optimal path through a thicket of concepts.

This so concisely captures both the power and most of the problems of these tools. If your goal is to get a reasonably good answer on a reasonably well-trod subject, you're going to be very happy with their output. If you push them outside of that, they quickly fall into producing reasonable-sounding but incorrect outputs, hallucinations, or outright failure.

throwawaylaptop 3 days ago | parent [-]

Considering a common trope is that most people are barely skilled at their main profession these days and almost clueless at everything else... is this really that bad?

Most people need help with things that are trivial to experienced people in that field, but don't have the access or time or funds to get experts.

Most Americans can barely understand fractions and have no idea how a refrigerator works. If LLMs can help them troubleshoot their fridge and figure out it's probably the circulating fan that's failing, because their freezer works but their fridge doesn't, isn't that mission accomplished, even if an HVAC tech can't use it to solve all his problems yet?

drewbeck 3 days ago | parent [-]

It depends what you want LLMs for, or what you think they should be for.

Imo you’re correct that they’re good enough if you scope your expectations appropriately. The problem is that the LLMs themselves don’t have any concept of their own limitations, leading to a kind of expectation creep. “Hey, the bot was pretty good at helping me troubleshoot my fridge, lemme ask it for the names of the first 12 presidents; I’m sure it’ll get that right.”

I think expectations and failure states for humans are far, far easier to understand and suss out than those for LLMs, and this is one of the robots’ biggest problems.