armchairhacker, 12 hours ago:
LLMs are good at discovery: they know a lot, and they can retrieve that knowledge from a query in ways that a simpler (e.g. regex-based) search engine with access to the same material couldn't. For example, an LLM given a case may surface an obscure law, or notice a pattern in past court cases that establishes precedent. So they can be helpful to a real judge. Of course, the judge must check that the law or precedent isn't hallucinated and actually applies to the case in the way the LLM claims. They should also prompt other LLMs, and use their own knowledge, in case the cited law or precedent contradicts others. There's a similar argument for scientists, mathematicians, doctors, investors, and other fields: LLMs are good at discovery, but their output must be checked.
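As a rough sketch of the checking step the comment describes (not something from the original post), the loop below asks several models for candidate citations, then verifies each one against an authoritative source before a human ever sees it. Both ask_llm and lookup_citation are hypothetical placeholders for whatever model API and legal database you actually use; only the shape of the workflow is the point.

    # Hypothetical sketch: cross-check citations suggested by several LLMs
    # before handing them to a human reviewer. ask_llm() and lookup_citation()
    # are placeholders, not real APIs.

    def ask_llm(model, case_facts):
        """Return the citations (statutes/precedents) this model thinks apply."""
        raise NotImplementedError  # call your model provider here

    def lookup_citation(citation):
        """Return the authoritative text of a citation, or None if it doesn't exist."""
        raise NotImplementedError  # query an official database, not another LLM

    def discover_and_verify(case_facts, models):
        # Collect suggestions and remember which models proposed each one.
        suggestions = {}
        for model in models:
            for citation in ask_llm(model, case_facts):
                suggestions.setdefault(citation, set()).add(model)

        verified, hallucinated = [], []
        for citation, proposed_by in suggestions.items():
            source_text = lookup_citation(citation)
            if source_text is None:
                hallucinated.append(citation)  # fails the existence check
            else:
                verified.append((citation, source_text, proposed_by))

        # Whether a verified citation actually applies to the case in the way
        # the LLM claims is still a judgment call for the human expert.
        return verified, hallucinated

The division of labour is the design choice: existence checks go to a primary source, relevance stays with the human.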
amlib, 7 hours ago (in reply):
I would add that "hallucinations" aren't even the only failure mode an LLM can have: it can partially or completely miss what it's supposed to find in the discovery process, and lead you to believe there just isn't anything worth pursuing down that particular avenue.