sho_hn 5 hours ago

Not in this review: also a record year for intelligent systems aiding human users in, and prompting them toward, fatal self-harm.

Will 2026 fare better?

simonw 4 hours ago | parent | next [-]

I really hope so.

The big labs are (mostly) investing a lot of resources into reducing the chance their models will trigger self-harm and AI psychosis and suchlike. See the GPT-4o retirement (and resulting backlash) for an example of that.

But the number of users is exploding too. If they make these incidents 5x less likely but sign up 10x more people, the absolute numbers still get worse.
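
To put rough numbers on that, here is the back-of-envelope arithmetic as a quick Python sketch. The user counts and per-user rate are made up purely for illustration, not real incident data:

    # Illustrative only: user counts and rates below are invented,
    # not real incident figures.
    users_before = 1_000_000
    rate_before = 0.001               # hypothetical per-user incident rate

    users_after = users_before * 10   # 10x more users
    rate_after = rate_before / 5      # incidents 5x less likely per user

    incidents_before = users_before * rate_before
    incidents_after = users_after * rate_after

    print(incidents_before)  # 1000.0
    print(incidents_after)   # 2000.0 -- absolute incidents still double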

andai 4 hours ago | parent | prev | next [-]

Also a record year for essential self-fulfilment.

But that one doesn't make headlines ;)

sho_hn 4 hours ago | parent [-]

Sure -- but that's fair game in engineering. I work on cars. If we kill people with safety faults, I expect it to make more headlines than all the fun roadtrips.

What I find interesting about chat bots is that they're "web apps", so to speak, but with safety engineering aspects that type of developer is typically not exposed to or familiar with.

simonw 4 hours ago | parent [-]

One of the tough problems here is privacy. AI labs really don't want to be in the habit of actively monitoring people's conversations with their bots, but they also need to prevent bad situations from arising and getting worse.

walt_grata 4 hours ago | parent [-]

Until AI labs have the equivalent of an SLA for giving accurate and helpful responses, it won't get better. They're not even able to measure whether the agents work correctly and consistently.

measurablefunc 4 hours ago | parent | prev | next [-]

The people working on this stuff have convinced themselves they're on a religious quest so it's not going to get better: https://x.com/RobertFreundLaw/status/2006111090539687956
