simianwords 18 hours ago
I've been thinking about this: what if an AI ran autonomously and flagged claims that are factually incorrect? That's easy to do on social media, where the context is global, but in enterprises it's a bit harder. A label like "flagged as very likely untrue by AI" is something I would really appreciate. I see many posts and comments across the internet that could easily be dispelled by a single LLM prompt. But this should only be applied when confidence is really high.
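The "only when confidence is really high" part is essentially a threshold gate. A minimal sketch of the idea, where `verify_claim` is a hypothetical stand-in for whatever LLM fact-check backend would produce a calibrated probability:

```python
def verify_claim(claim: str) -> float:
    # Hypothetical stand-in for an LLM fact-check call; a toy lookup
    # here so the sketch runs on its own. Assumed to return a
    # probability in [0, 1] that the claim is false.
    known = {
        "the earth is flat": 0.99,
        "water boils at 100c at sea level": 0.02,
    }
    return known.get(claim.lower(), 0.5)  # 0.5 = unknown

def should_flag(claim: str, threshold: float = 0.95) -> bool:
    # Surface "flagged as very likely untrue by AI" only when the
    # model's confidence clears a high bar; unknown or uncertain
    # claims pass through unflagged.
    return verify_claim(claim) >= threshold
```

With a high threshold, anything the backend is unsure about (here, the 0.5 default) is never flagged, which is the conservative behavior the comment is asking for.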
what 15 hours ago | parent
Why do you think an LLM knows what is fact?