AnimalMuppet 4 days ago

Here is my Bayesian version of this: If you have lies coming at you in high enough volume, you cannot update your priors at all, or else you will eventually come to believe the lie.
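
A minimal sketch of that dynamic, assuming the naive failure mode: each repetition of the lie is treated as weak, independent evidence. In log-odds form every repetition adds the same small increment, so any finite prior is eventually overwhelmed. The specific numbers here (prior = 0.01, likelihood ratio = 1.05) are illustrative assumptions, not anything from the thread:

    import math

    def posterior_after_n_repetitions(prior, likelihood_ratio, n):
        """Posterior belief in a claim after n 'sightings' of it, each
        naively treated as independent evidence with the given
        likelihood ratio."""
        log_odds = math.log(prior / (1 - prior)) + n * math.log(likelihood_ratio)
        return 1 / (1 + math.exp(-log_odds))

    prior = 0.01  # strongly skeptical starting belief (illustrative)
    lr = 1.05     # each repetition counted as very weak evidence (illustrative)
    for n in (0, 100, 200, 400):
        belief = posterior_after_n_repetitions(prior, lr, n)
        print(f"after {n:>3} repetitions: belief = {belief:.3f}")

With those numbers, belief crosses 0.5 after roughly a hundred repetitions and is near certainty by a few hundred. That's the dilemma: either you refuse to treat repetitions as evidence at all (stop updating), or sheer volume wins.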

But then you have the problem: If you won't update your priors, and neither will someone else, but they have different priors than you, how can you talk to them?

But I'm maybe a bit less cynical than you. I think (maybe I'm kidding myself) that I can to some degree detect... something.

If someone built a parallel universe of false journal articles written by false experts, and then false news articles that referred to the false journal articles, and then false or sockpuppet users to point people to the news articles, that would be very hard to detect if it were done well. (A very persistent investigator might realize that the false experts came either from non-existent universities or from universities that denied that the experts existed.) But often it isn't done well at all. Often it's "hey, here's this inadequately supported thing that disagrees with your priors that everybody is hyping". For me, "solidly disagrees with my solidly-held priors" is enough to give it a very skeptical look, and that's often enough to turn up the "inadequately supported" part. So I can at least sometimes detect something that looks like this, and avoid listening to it and believing it.

I'm hoping that that's enough to avoid "When does trying to become informed become an inevitable net negative because you're literally better off knowing nothing?" But we shall see.

jerf 4 days ago

It's the "team of PhDs dedicated to me personally" part that gets me.

In the current world, and the world for the next few years, the amount of human and CPU time that can be aimed at me personally is still low enough that what "real reality" generates outweighs the targeted content, and even the targeted content is more accurately modeled as a lot of semi-smart people throwing stuff at the wall and hoping to hook "someone" who may not be me personally. We talk about PhDs getting kids to click ads, and there's some truth to that, but at least there isn't anything like a human-brain-equivalent dedicated to getting my kids, personally, to click on ads. I distrust a lot of things, but at least I can counter the content with the fact that needing to appeal broadly keeps it somewhat grounded in some sort of reality.

Over time, though, my personal brainpower isn't going to go up, but the amount of firepower aimed directly at me is.

The good news is that it probably won't be unitary, just as the targeting today isn't unitary. But I'd like something better than that. And playing the targeters against each other gets harder once they become aware of that tactic and start compensating for it, because by then they'll have the firepower to aim that compensation at me personally.

9dev 4 days ago

If I may, I’d suggest reading the latest Harari book on this topic, Nexus. Great read with interesting ideas.