jygg4 14 hours ago
The issue with LLMs is trust, and I don't see that ever going away. Humans have learned to trust other humans over a long time scale, with rules in place to control behaviour.
furyofantares 4 hours ago
When the dust settles (for example, if LLMs were to stop improving today), we would come to learn their exact capabilities: what they can do reliably and what they can't. Once we know what they do well, and how to get them to do it well, you could say we "trust" them with the first category and simply stop trying to get them to do the second.
marcuschong 12 hours ago
That's a big problem with very specific manifestations. My startup helps customers handle regulatory compliance, in part by forwarding complex questions to a pool of consultants. We've now compared more than a hundred consultant replies with those of GPT Pro, and the quality is roughly the same: sometimes a little worse, sometimes a little better, always more detailed, never unacceptable. But how do we convince our customers that we have the right technology and know how to use it appropriately? We're trying, but it's not easy. Part of that is accountability: when the LLM produces rubbish, as rare as that may be, who is accountable? There is no person, with a reputation at stake, attached to it.
| ||||||||||||||