Scaevolus 3 hours ago:

They are bullshit machines because they do not have an internal mental model of truth like a human does. The flagship models bullshit less, but their fundamental architectures prevent having truth interfere with output. https://philosophersmag.com/large-language-models-and-the-co...
bensyverson 3 hours ago (parent):

"Bullshit" is a human concept. LLMs do not work like the human brain, so to call their output "bullshit" is ascribing malice and intent that is simply not there. LLMs do not "think." But that does not mean they're not incredibly powerful and helpful in the right context.