PunchyHamster 18 hours ago
I dunno, many people seem to have that weird, unfounded trust in what AI says, more than in actual human experts.
bilbo0s 18 hours ago
Because AI, or rather an LLM, is the consensus of many human experts as encoded in its embedding. So it is better, but only for those who are already experts in the subject they're asking about. The problem is, you have to know enough about the subject to land in the right place in the embedding. If you don't, you'll just get bunk. (I know it's popular to call AI bunk "hallucinations" these days, but really, if it were being spouted by a half-wit human we'd just call it "bunk".)

So you really have to be an expert to maximize your use of an LLM. And even then, you'll only be able to maximize it in the field where your expertise lies. A programmer, for instance, will likely never be able to ask a coherent enough question about economics or oncology for an LLM to give a reliable answer. Similarly, an oncologist will never be able to give a coherent enough software specification for an LLM to write an application for him or her.

That's the Achilles heel of AI today as implemented by LLMs.