TIPSIO (9 hours ago):
Have you ever used any Anthropic AI product? You literally cannot do anything without big permission prompts, warnings, or an annoying always-on popup warning you about safety.
|
raesene9 (8 hours ago):
Claude Code has a YOLO mode, and from what I've seen, a lot of heavy users use it. Fundamentally, any security mechanism that relies on users reading and intelligently responding to approval prompts is doomed to fail over time, even if the prompts are well designed. Approval fatigue will kick in, and people will either start clicking through without reading or move to systems that let them disable the warnings outright (which is exactly what YOLO mode in Claude Code is).
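(For concreteness, and to my knowledge -- the exact flag name may have changed -- bypassing the approval prompts is a single switch at launch, e.g.:

    claude --dangerously-skip-permissions

so the escape hatch is always one flag away for anyone tired of clicking "allow".)
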
TIPSIO (8 hours ago):
Yes, it basically does! My point was that I really doubt Anthropic will fail to make it clear to users that this is manipulating their computer.

fragmede (3 hours ago):
Users are asking it to manipulate their computer for them, so I don't think that part is being lost.

hypfer (8 hours ago):
No, of course not.

Well... apart from their API. That is a useful thing. But you're missing the point. It is doing all this stuff with user consent, yes. It's just that the user fundamentally cannot provide informed consent, as they seem to be out of their minds. So yeah, technically, all those compliance checkboxes are ticked.

That's just entirely irrelevant to the point I am making.

Wowfunhappy (8 hours ago):
> It's just that the user fundamentally cannot provide informed consent

The user is an adult. They are capable of consenting to whatever they want, no matter how irrational it may look to you.

hypfer (8 hours ago):
Uh, yes? What does that refute?

Wowfunhappy (8 hours ago):
You just said the user is incapable of providing informed consent.

In any context, I really dislike software that prevents me from doing something dangerous in order to "protect" me. That's how we get iOS. The user is an adult; they can consent to this if they want to. If Anthropic is using dark patterns to trick them, that's a different story--that wouldn't be informed consent--but I don't think that's happening here?

hypfer (8 hours ago):
This is not about whether people should be allowed to harm themselves, though. Legally, yes, everyone can do that. The question is whether that is a good thing. Do we just want to look away when large orgs benefit from people not realizing that they're harming themselves? Do we want to ignore the larger societal implications of this?

If you want to delete your rootfs, be my guest. I just won't be cheering for a corp that tells you that you're brilliant and absolutely right for doing so.

I believe it's a bad thing to frame this as a conflict between individual freedom and protecting the weak(est) parts of society. I don't think anything good can come out of seeing the world that way.