eslaught 2 days ago

Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.

I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.

pipes 2 days ago | parent [-]

I don't understand how the placebo effect is a human bias. Is it?

wongarsu 2 days ago | parent [-]

At least in some instances you can frame it that way: you believe that doctors and medicine are effective at treating disease, so when you are sick and a doctor gives you a bottle of sugar pills and you take them, you now interpret your state through the lens that you should feel better. That is a bias in how you perceive your own condition.

That's not all the placebo effect is, but it's probably the aspect that best fits the framing as a bias.

literalAardvark a day ago | parent [-]

It's much more than a bias.

You actually get better through placebo, as long as your body has a pathway available to produce that improvement.

It's a really weird effect.

In clinical trials, the fight isn't against triggering the placebo effect; it's against letting it muddle study results.

eager_learner a day ago | parent [-]

I really love the back-and-forth in this mini-thread, I learned a lot about good thinking skills here. Thanks everyone.