152334H 9 hours ago

Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies? How is a chatbot supposed to determine when a user fools even themselves about what they have experienced? What 'tough love' can be given to someone who, having been so unreasonable throughout their life as to always invite scorn and retort from other humans, is happy to interpret any engagement at all as a sign of approval?
rsynnott 8 hours ago

> How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

And even if it _could_, note, from the article:

> Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.

The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.
kibwen 8 hours ago

> Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

Markets don't optimize for what is sensible; they optimize for what is profitable.
isodev 8 hours ago

> clear thinking

Most humans working in tech lack this particular attribute, let alone tools driven by token-similarity (and not actual 'thinking').
expedition32 8 hours ago

It's almost as if being a therapist is an actual job that takes years of training and experience! AI may one day rewrite Windows, but it will never be Counselor Troi.