matwood 2 days ago:
> It's easy to assume it's always accurate when it generally is. But it's not always.

So, like a lot of the internet? I don't really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard, yet society uses it every day.
johannes1234321 2 days ago:
It's a question of judgement in each individual case. Documentation for a specific product I expect to be mostly right, though it may miss the detail I need. A blog by an author I've never heard of I trust less. Some third-party sites I give some trust, others less. AI is a mixed bag, yet it always implies authority on the subject (while becoming submissive when corrected).
Rohansi 2 days ago:
It's a marketing issue. LLMs are being marketed much like Tesla's FSD: claims of PhD-level intelligence, AGI, artificial superintelligence, etc. set the expectation that LLMs are smarter than (most of) us. Why would we have any reason to doubt the claims of something smarter than us? Especially when it states them so confidently.