gary_0 5 days ago
Determining a statement's truth (or whether it's outside the system's knowledge) is an old problem in machine intelligence, with whole subfields like knowledge graphs devoted to it, and it's NOT a problem LLMs were originally meant to address at all. LLMs are text generators that are very good at writing a book report based on a prompt and the patterns learned from the training corpus, but it's an entirely separate problem to go through that book report statement by statement and determine whether each one is true/false/unknown. And that's a problem the AI field has already spent 60 years on, so there's a lot of hubris in assuming you can just solve it and bolt it onto the side of GPT-5 by next quarter.
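To make the true/false/unknown part concrete, here's a minimal Python sketch of checking claims against a toy knowledge graph. Everything in it is illustrative: the triples, the predicate names, and especially the assumption that claims arrive pre-parsed as (subject, predicate, object) tuples. In reality, extracting and linking claims from free text is a large part of those 60 years.

    # Minimal sketch (all names and data hypothetical): check one claim,
    # already parsed into a (subject, predicate, object) triple, against
    # a toy knowledge graph and return true/false/unknown.
    from enum import Enum

    class Verdict(Enum):
        TRUE = "true"
        FALSE = "false"
        UNKNOWN = "unknown"

    # Toy knowledge graph: the set of triples the system "knows".
    KG = {
        ("paris", "capital_of", "france"),
        ("water", "boils_at_celsius", "100"),
    }

    def verify(subject: str, predicate: str, obj: str) -> Verdict:
        if (subject, predicate, obj) in KG:
            return Verdict.TRUE
        # If the graph records a *different* value for this subject and
        # predicate, the claim contradicts it (assumes single-valued
        # predicates, which is a big simplification).
        if any(s == subject and p == predicate for (s, p, o) in KG):
            return Verdict.FALSE
        # Otherwise the graph is simply silent: open-world "unknown".
        return Verdict.UNKNOWN

    print(verify("paris", "capital_of", "france"))   # Verdict.TRUE
    print(verify("paris", "capital_of", "germany"))  # Verdict.FALSE
    print(verify("paris", "population", "2000000"))  # Verdict.UNKNOWN

Even this toy version only works because the claims are already structured; mapping an LLM's free-form sentences onto triples like these (claim extraction, entity linking, claims the graph can't even express) is where the hard part lives.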
red75prime 4 days ago
> And that problem is one that the AI field has already spent 60 years on

I hope you don't think the solution will be a closed-form expression. The solution should involve exploration and learning; the things that LLMs are instrumental in, you know.