gpm 4 hours ago
Saying "some people use llms to spread lies therefore I don't trust any llms" is like saying "since people use people to spread lies therefore I don't trust any people". Regardless of whether or not you should trust llms this argument is clearly not proof of it. | ||||||||
no_wizard an hour ago | parent
Those are false equivalences. If a technology can't reliably sort out which sources are trustworthy and filter out the rest, then it's not a trustworthy technology. These are tools, after all; I should be able to trust a hammer if I use it correctly.

All this also misses the other point: this shows that the narrative companies are selling about AI is not based on objective capabilities.
| ||||||||