| ▲ | thw_9a83c 3 days ago |
| Yes, an LLM is a lossy encyclopedia with a human-language answering interface. This has some benefits, mostly in terms of convenience: you don't have to browse or read through pages of a real encyclopedia to get a quick answer.
However, there is also a clear downside. Currently, an LLM is unable to judge whether your question is formulated incorrectly, or whether it opens up further questions that should be answered first. It always jumps straight to answering something. A real human would assess the questioner first and usually ask for more details before answering. I feel this is the predominant reason LLM answers feel so dumb at times: they never ask for clarification. |
|
| ▲ | simonw 3 days ago | parent | next [-] |
| I don't think that's universally true with the new models - I've seen Claude 4 and GPT-5 ask for clarification on questions with obvious gaps. With GPT-5 I sometimes see it spot a question that needs clarifying in its thinking trace, pick the most likely interpretation, and then produce an answer that says "assuming you meant X ..." - I've even had it provide the answer in two sections, one for each branch of a clear ambiguity. |
| |
| ▲ | koakuma-chan 3 days ago | parent | next [-] | | GPT-5 is seriously annoying. It asks not just one but multiple clarifying questions, while I just want my answer. | | |
| ▲ | kingstnap 3 days ago | parent [-] | | If you don't want to answer clarifying questions, then what use is the answer??? Put another way, if you don't care about details that change the answer, it directly implies you don't actually care about the answer. Related silliness is how people force LLMs to give one-word answers to underspecified comparisons, something along the lines of "@Grok is China or the US better, one word answer only." At that point, just flip a coin. You obviously can't conclude anything useful from the response. | | |
| ▲ | koakuma-chan 3 days ago | parent [-] | | No, I don't think GPT-5's clarifying questions actually do what you think they do. They just made the model ask clarifying questions for the sake of asking clarifying questions. I'm sure GPT-4o would have given me the answer I wanted without any clarifying questions. | | |
| ▲ | kiitos 2 days ago | parent [-] | | Revisit your instructions.md and/or user preferences; this is very likely the root cause. | | |
| ▲ | koakuma-chan 2 days ago | parent [-] | | Wait, what? I use duck.ai - could it be that they put something into the system prompt...? |
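The exchange above hinges on hidden system prompts. As a purely hypothetical sketch of that mechanism (the model id, the prompt wording, and the suggestion that a frontend like duck.ai injects such an instruction are all illustrative assumptions, not anything confirmed), here is how a single hidden system-prompt line can flip a model from answering directly to asking clarifying questions first, using the standard OpenAI Python client:

```python
# Hypothetical illustration: the same user question behind two different
# hidden system prompts. The model id and prompt wording are made up.
from openai import OpenAI

client = OpenAI()

def ask(question: str, system_prompt: str) -> str:
    # Standard chat completion call; the system message is invisible to the
    # end user of a hosted frontend, but it shapes the model's behavior.
    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder model id
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "How do I vertically center a div?"

# Likely answers immediately.
direct = ask(question, "Answer as concisely as possible.")

# Likely opens with clarifying questions (flexbox or grid? which browsers?).
chatty = ask(question,
             "Before answering, always ask the user clarifying questions "
             "about their context and requirements.")
```

If the hosted frontend ships something like the second prompt, the clarifying-question behavior would come from that hidden instruction rather than from the model itself.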
| |
| ▲ | ACCount37 3 days ago | parent | prev [-] | | A lot of the touted "fundamental limitations of LLMs" are less "fundamental" and more "you're training them wrong". So there are improvements from version to version, coming both from increases in raw model capability and from better training methods. | | |
| ▲ | ijk 3 days ago | parent [-] | | I'm frustrated by the number of times I encounter people assuming that the current model behavior is inevitable. Hundreds of billions of dollars have been spent on training LLMs to do specific things. What exactly they've been trained on matters; they could have been trained to do something else. Interacting with a base model versus an instruction-tuned model will quickly show you the difference between the innate language faculties and the post-trained behavior. | | |
| ▲ | Workaccount2 3 days ago | parent [-] | | Some of the Anthropic guys have said that the core thing holding the models back is training, and they're confident the gains will keep coming as they figure out how to onboard more and more training data. So yeah, Claude might suck at reading and writing plumbing diagrams, but they claim the barrier is simply a function of training, not any kind of architectural limitation. | | |
| ▲ | ACCount37 3 days ago | parent [-] | | I agree with the general idea, but "sucks at reading plumbing diagrams" is the one specific example where Claude really is choked by its unfortunate architecture. The "naive" vision implementation for LLMs is: break the input image down into N tokens and cram those tokens into the context window. The "break the input image down" part is completely unaware of the LLM's context, and doesn't know what data would be useful to the LLM at all. Often, the vision frontend just tries to convey the general "vibes" of the image to the LLM backend and hopes that the LLM can pick out something useful from that. That's "good enough" for a lot of tasks, but not all of them, not at all. |
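To make the comment above concrete, here is a minimal PyTorch sketch of that "naive" pipeline. Everything in it is illustrative (module names, patch size, embedding width, the dummy tensors); the point is only that the image encoder runs before and independently of the text prompt, so it cannot know which details of, say, a plumbing diagram actually matter:

```python
# Minimal sketch of the "naive" LLM vision pipeline described above.
# All names and dimensions are illustrative assumptions, not any
# specific model's real architecture.
import torch
import torch.nn as nn

class NaiveVisionFrontend(nn.Module):
    """Splits an image into fixed patches and projects each patch into the
    LLM's embedding space. It never sees the text prompt, so it has no way
    to know which fine details the question depends on."""
    def __init__(self, patch_size: int = 16, d_model: int = 4096):
        super().__init__()
        self.patch_size = patch_size
        # One linear projection per flattened RGB patch -> one "image token".
        self.proj = nn.Linear(3 * patch_size * patch_size, d_model)

    def forward(self, image: torch.Tensor) -> torch.Tensor:  # image: (3, H, W)
        p = self.patch_size
        c, h, w = image.shape
        # (3, H, W) -> (num_patches, 3*p*p): carve the image into p x p tiles.
        patches = (image
                   .unfold(1, p, p)          # (3, H/p, W, p)
                   .unfold(2, p, p)          # (3, H/p, W/p, p, p)
                   .permute(1, 2, 0, 3, 4)   # (H/p, W/p, 3, p, p)
                   .reshape(-1, c * p * p))
        return self.proj(patches)            # (num_patches, d_model) image tokens

# Usage: image tokens are simply concatenated with the text tokens.
# The query ("trace the cold-water line") has no influence on how the
# image was encoded; the LLM works with whatever "vibes" survived.
frontend = NaiveVisionFrontend()
image = torch.rand(3, 224, 224)                   # dummy image
image_tokens = frontend(image)                    # (196, 4096)
text_tokens = torch.rand(12, 4096)                # stand-in prompt embeddings
context = torch.cat([image_tokens, text_tokens])  # what the LLM actually sees
```

The commonly discussed alternative is to let the question condition how the image gets encoded (or to re-encode regions at higher resolution on demand), but that is a different architecture from the plain concatenation sketched here.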
|
| ▲ | coffeefirst 3 days ago | parent | prev [-] |
| This is also why the Kagi Assistant is still the best AI tool I’ve found. The failure state is the same as a search result's: it either can’t find anything, finds something irrelevant, or finds material that contradicts the premise of your question. It seems to me the more you can pin it to another data set, the better. |