throwaway290 3 days ago

Because people make them, and people make them for profit. Incentives make the product what it is.

An LLM just needs to confidently return something that's good enough for the average person in order to make money. If an LLM said "I don't know" more often, it would make less money, because to the user that means the thing they're paying for failed at its job.

jmye a day ago

> and why that’s a particularly difficult problem to solve

The person I responded to, who seems like someone who definitely knows his stuff, made a comment that implied it was a technically difficult thing to do, not a trivially easy thing that's completely explained by "welp, $$$", which is why I asked. Your comments may point to why ChatGPT doesn't do it, but they're not really answering the actual question, in context.

Especially where the original idea (not mine) was a lightweight LLM that can answer basic things, but knows when it doesn't know the answer and can go ask a heftier model for back-up.
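As a rough sketch of that idea (assuming a generic small-model/large-model split, a made-up confidence score, and an arbitrary 0.8 threshold; none of this is a real API), the routing could look something like:

    # Sketch of the "lightweight model asks a heftier one for backup" idea.
    # ask_small / ask_large are stand-ins for real model calls; the
    # confidence score and the 0.8 threshold are assumptions.

    def ask_small(question: str) -> tuple[str, float]:
        """Toy 'small model': a lookup table plus a fake confidence score."""
        known = {"capital of france": ("Paris", 0.97)}
        answer, confidence = known.get(question.lower(), ("not sure", 0.10))
        return answer, confidence

    def ask_large(question: str) -> str:
        """Stand-in for an expensive call to a bigger model."""
        return f"[large model answers: {question}]"

    def answer(question: str, threshold: float = 0.8) -> str:
        reply, confidence = ask_small(question)
        if confidence >= threshold:
            return reply                # cheap path: small model is confident
        return ask_large(question)      # escalate when it "doesn't know"

    print(answer("capital of France"))     # handled by the small model
    print(answer("why is the sky blue?"))  # escalated to the large model

The hard part is the confidence score itself: deciding, from the small model's own output, when it genuinely "doesn't know".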

throwaway290 a day ago

I think that person should consider that a technically difficult thing that makes more money gets solved, and a technically difficult thing that makes less money doesn't.

By the way, models don't "know". They autocomplete tokens.
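Concretely (with invented numbers, just to illustrate the point): generation is repeated next-token selection from a probability distribution, and there is no built-in "I know / I don't know" flag, only how peaked or flat that distribution happens to be.

    import math

    # Made-up next-token distribution after a prompt like "The capital of France is"
    next_token_probs = {"Paris": 0.62, "Lyon": 0.21, "the": 0.10, "a": 0.07}

    best = max(next_token_probs, key=next_token_probs.get)
    entropy = -sum(p * math.log(p) for p in next_token_probs.values())

    print(best)     # the model "answers" by emitting the most likely token
    print(entropy)  # a flat (high-entropy) distribution is the closest thing to
                    # "not knowing", and it has to be inferred from the outside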

> Your comments may point to why ChatGPT doesn't do it

Any commercial model, which is most of them...