kingstnap 5 hours ago

> she said she was aware that DeepSeek had given her contradictory advice. She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority

With highly lucid people like the author's mom, I'm not too worried about Dr. Deepseek. I'm actually incredibly bullish on the fact that AI models are, as the article describes, superhumanly empathetic. They are infinitely patient, infinitely available, and unbelievably knowledgeable; it really is miraculous.

We don't want to throw the baby out with the bathwater, but there are obviously a lot of people who really cannot handle the seductiveness of something that agrees with them like this.

I do think there is real potential for progress on this front, though, especially given the level of care and effort being put into making chatbots better for medical uses and the sheer number of smart people working on the problem.

HPsquared 4 hours ago | parent | next [-]

If the computer is the bicycle of the mind, GenAI is a motor vehicle. Very powerful and transformative, but it's also possible to get into trouble.

thewebguyd 3 hours ago | parent | next [-]

A stark difference with that analogy is that with a bicycle, the human is still doing quite a bit of work themselves. The bicycle amplifies the human effort, whereas with a motor vehicle, the vehicle replaces the human effort entirely.

No strong opinion on whether that's good or bad long term, as humans have been outsourcing portions of their thinking for a very long time, but it's interesting to think about.


MisterTea 3 hours ago | parent | prev | next [-]

> They are infinitely patient, infinitely available, and unbelievably knowledgeable, it really is miraculous.

This is a strange way to talk about a computer program following its programming. I see no miracle here.

turtletontine 2 hours ago | parent [-]

I feel like I’ve seen more and more people recently fall for this trick. No, LLMs are not “empathetic” or “patient”, and no, they do not have emotions. They’re incredibly huge piles of numbers following their incentives. Their behavior convincingly reproduces human behavior, and they express what looks like human emotions… because their training data is full of humans expressing emotions. Sure, sometimes it’s helpful for their outputs to exhibit a certain affect or “personality”. But falling for the act and really attributing human emotions to them is alarming to me.

guntars 2 hours ago | parent [-]

There’s no trick. It’s less about what actually is going on inside the machine and more about the experience the human has. From that lens, yes, they are empathetic.

atoav 4 hours ago | parent | prev | next [-]

Well yes, but as an extremely patient person I can tell you that infinite patience doesn't come without its own problems. In certain social situations the ethically better thing to do is actually to lose your patience, whether to shake up the person talking to you, or to signal that they are going down a wrong path, or whatnot.

I have experience with building systems to remove that infinite patience from chatbots and it does make interactions much more realistic.
