throwaway290 3 days ago:
Because people make them, and people make them for profit. Incentives shape the product. An LLM just needs to confidently return something that's good enough for the average person in order to make money. If an LLM said "I don't know" more often, it would make less money, because to the user that means the thing they're paying for failed at its job.
jmye a day ago:
> and why that's a particularly difficult problem to solve

The person I responded to, who clearly knows his stuff, made a comment implying this was a technically difficult thing to do, not a trivially easy thing that's completely explained by "welp, $$$", which is why I asked. Your comments may point to why ChatGPT doesn't do it, but they aren't really answering the actual question in context. Especially since the original idea (not mine) was a lightweight LLM that can answer basic things, but knows when it doesn't know the answer and can go ask a heftier model for back-up.
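Roughly the kind of cascade I mean, as a hand-wavy sketch: the model calls and the confidence threshold below are made-up placeholders, not any real API.

    # Sketch of the "small model, escalate when unsure" idea.
    # small_model() and large_model() are stand-ins, not real endpoints.

    CONFIDENCE_THRESHOLD = 0.8  # arbitrary cutoff for "the small model actually knows this"

    def small_model(question: str) -> tuple[str, float]:
        # Placeholder: a cheap model would return an answer plus some confidence signal
        # (e.g. derived from token logprobs or a trained "do I know this?" check).
        return "draft answer", 0.4

    def large_model(question: str) -> str:
        # Placeholder: the expensive fallback model.
        return "careful answer"

    def answer(question: str) -> str:
        draft, confidence = small_model(question)
        if confidence >= CONFIDENCE_THRESHOLD:
            return draft              # small model is confident enough, serve its answer
        return large_model(question)  # otherwise hand the question to the heftier model

    print(answer("What year did the Berlin Wall fall?"))

The hard part, of course, is whether that confidence signal is any good, which is exactly what I was asking about.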