| ▲ | niyikiza 10 hours ago |
| You're right, they should be responsible. The problem is proving it.
"I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures. And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action executed three hops away from the human?
The article argues for receipts that make "I didn't authorize that" a verifiable claim |
|
| ▲ | bulatb 10 hours ago | parent | next [-] |
| There's nothing to prove. Responsibility means you accept the consequences of its actions, whatever they are. You own the benefit? You own the risk. If you don't want to be responsible for the actions of a tool that might do anything at all, don't use the tool. The other option is admitting that you don't accept responsibility, not looking for a way to be "responsible" but not accountable. |
| |
| ▲ | tossandthrow 10 hours ago | parent [-] | | Sounds good in theory, doesn't work in reality. Had it worked, we would have seen many more CEOs in prison. | | |
| ▲ | walt_grata 9 hours ago | parent | next [-] | | A few edge cases where it doesn't work don't mean it doesn't work in the majority of cases, or that we shouldn't try to fix those edge cases. | |
| ▲ | Muromec 7 hours ago | parent | prev | next [-] | | CEOs are like cars and immigrants. Both kill people all the time, but we choose to believe they are a net positive to society, look the other way, and try to put symbolic band-aids here and there. The same may or may not happen with AI. We can bite the bullet and say it's fine that it sometimes happens. We can also ban the entire thing if we feel the tradeoff isn't worth it. | | |
| ▲ | direwolf20 7 hours ago | parent [-] | | You're not doing any favors to your hirability with those first two sentences. | | |
| ▲ | Muromec 6 hours ago | parent [-] | | The market is almighty, but it's all-merciful as well, and thankfully, not all-knowing. |
|
| |
| ▲ | freejazz 9 hours ago | parent | prev | next [-] | | This isn't a legal argument, and these conversations are so tiring because everyone here insists on drawing legal conclusions from them. | |
| ▲ | bulatb 9 hours ago | parent | prev | next [-] | | We're talking about different things. To take responsibility is to volunteer to accept accountability without a fight. In practice, almost everyone is held potentially or actually accountable for things they never had a choice in. Some are never held accountable for things they freely choose, because they have some way to dodge accountability. The CEOs who don't accept accountability were lying when they said they were responsible. | |
| ▲ | NoMoreNicksLeft 9 hours ago | parent | prev [-] | | The veil of liability is built into statute, and it's no accident. No such magic forcefield exists for you, though. |
|
|
|
| ▲ | LeifCarrotson 9 hours ago | parent | prev | next [-] |
| > "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures. No, it's trivial: "So you admit you uploaded confidential information to the unpredictable tool with wide capabilities?" > Who's accountable when the action executed three hops away from the human? The human is accountable. |
| |
| ▲ | pixl97 8 hours ago | parent | next [-] | | As the saying goes: "A computer can never be held accountable. Therefore, a computer must never make a management decision." | | |
| ▲ | direwolf20 7 hours ago | parent [-] | | That was when companies were accountable for their results and needed to push that accountability onto a person to deter bad outcomes. You couldn't let a computer make a decision because a computer can't be deterred by accountability. Now companies do bad things all the time, know they're doing it, and need to avoid any individual being accountable for it. Computers are the perfect tool for making decisions without obvious accountability. |
| |
| ▲ | gowld 8 hours ago | parent | prev | next [-] | | What if you carried a stack of papers between buildings on a windy day, and the papers blew away? | | | |
| ▲ | Muromec 7 hours ago | parent | prev [-] | | > The human is accountable.
That's an orthodoxy. It holds for now (in theory and most of the time), but it's just an opinion, like a lot of other things. Who is accountable when we have a recession, or when people can't afford whatever we strongly believe should be affordable? The system, the government, the market, late-stage capitalism, or whatever. Not a person who actually goes to jail. If the value proposition becomes attractive, we can choose to believe that the human is not in fact accountable here, but the electric shaitan is. We just didn't pray well enough, but we did our best, really. What else can we expect? |
|
|
| ▲ | phoe-krk 9 hours ago | parent | prev | next [-] |
| > "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures. If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people". Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value. |
| |
| ▲ | Muromec 7 hours ago | parent | next [-] | | > Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.
What if I hire you (instead of an LLM) to summarize the reports and you decide to email the competitors? What if we work in an industry where you have to be sworn in with an oath to protect secrecy? What if I did (or didn't) check with the police about your previous deeds, but this is the first time you've emailed competitors? What if you're a schizophrenic who heard God's voice telling you to do it, and it's the first episode you've ever had? | | |
| ▲ | phoe-krk an hour ago | parent [-] | | The difference is that LLMs are known to hallucinate regularly and commonly as their main (and only) mode of internal functioning. Human intelligence, empirically, is more than just a stochastic probability engine, and therefore has different standards applied to it than whatever machine intelligence currently exists. |
| |
| ▲ | im3w1l 9 hours ago | parent | prev [-] | | > otherwise the concept of responsibility loses all value.
Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if that scare tactic is no longer applicable to the way we work, it might be time to discard it. | | |
| ▲ | phoe-krk 9 hours ago | parent [-] | | A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along. It's scary that a nuclear exit starts looking like an enticing option when confronted with that. | | |
| ▲ | direwolf20 7 hours ago | parent | next [-] | | I saw some people saying the internet, particularly brainrot social media, has made everyone mentally twelve years old. It feels like it could be true. Twelve-year-olds aren't capable of dealing with responsibility or consequence. | |
| ▲ | Muromec 7 hours ago | parent | prev | next [-] | | > A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.
That value proposition depends entirely on whether there is also an upside to all of that. Do you actually need truth, meaning, responsibility, and consequences while you are tripping on acid? Do you even need to be alive and have a physical organic body for that? What if Ikari Gendo was actually right, and everyone else is just an asshole who won't let him be with his wife? | |
| ▲ | im3w1l 8 hours ago | parent | prev [-] | | Ultimately the goal is to have a system that prevents mistakes as much as possible and adapts and self-corrects when they do happen. Even with science we acknowledge that mistakes happen and people draw incorrect conclusions, but the goal is to make that a temporary state that is fixed as more information comes in. I'm not claiming to have all the answers about how to achieve that, but I am fairly certain punishment is not a necessary part of it. |
|
|
|
|
| ▲ | QuadmasterXLII 10 hours ago | parent | prev | next [-] |
| This doesn't seem conceptually different from running [ $[ $RANDOM % 6] = 0 ] && rm -rf / || echo "Click"
on your employer's production server, and the liability doesn't seem murky in either case |
| |
| ▲ | staticassertion 9 hours ago | parent [-] | | What if you wrote something more like:
  # terrible code, never use ty
  import os

  def cleanup(dir):
      os.system(f"rm -rf {dir}")

  def main():
      work_dir = os.environ["WORK_DIR"]
      cleanup(work_dir)
and then due to a misconfiguration "$WORK_DIR" was truncated to be just "/"? At what point is it negligent?
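For contrast, a minimal defensive version of the same cleanup (a hypothetical guard, just to illustrate where the negligence question might start) could look something like:
  import os

  def cleanup(dir):
      # refuse empty or root-level paths before ever touching rm
      if not dir or os.path.abspath(dir) == "/":
          raise ValueError(f"refusing to delete suspicious path: {dir!r}")
      os.system(f"rm -rf {dir}")

  def main():
      # fail loudly here if WORK_DIR is missing, not inside rm
      work_dir = os.environ.get("WORK_DIR")
      cleanup(work_dir)
The guard doesn't make the tool safe; it just moves the failure from the filesystem to an exception, which is roughly where the negligence line gets drawn. | | |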
| ▲ | direwolf20 9 hours ago | parent [-] | | This is not hypothetical. Steam and Bumblebee did it. | | |
| ▲ | extraduder_ire 9 hours ago | parent | next [-] | | That was the result of an additional space in the path passed to rm, IIRC. Though rm /$TARGET where $TARGET is blank is a common enough footgun that --preserve-root exists and is default. | | | |
| ▲ | a_t48 8 hours ago | parent | prev [-] | | Bungie, too, in a similar way. |
|
|
|
|
| ▲ | groby_b 9 hours ago | parent | prev | next [-] |
| "And when sub-agents or third-party tools are involved, liability gets even murkier." It really doesn't. That falls straight on Governance, Risk, and Compliance. Ultimately, CISO, CFO, CEO are in the line of fire. The article's argument happens in a vacuum of facts. The fact that a security engineer doesn't know that is depressing, but not surprising. |
| |
| ▲ | Muromec 7 hours ago | parent [-] | | > The fact that a security engineer doesn't know that is depressing, but not surprising.
That's a very subtle guinea pig joke right there. |
|
|
| ▲ | freejazz 9 hours ago | parent | prev | next [-] |
| The burden of substantiating a defense is upon the defendant and no one else. |
|
| ▲ | groby_b 9 hours ago | parent | prev [-] |
| "Our tooling was defective" is not, in general, a defence against liability. Part of a companys obligations is to ensure all its processes stay within lawful lanes. "Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs." One, the analysts word does not override the logs, that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three month retention period for AI queries by an analyst is not an option. SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat. |
| |
| ▲ | niyikiza 9 hours ago | parent [-] | | Agree ... retention is mandatory. The article argues you should retain authorization artifacts, not just event logs. Logs show what happened. Warrants show who signed off on what.
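To make that concrete, here is a rough sketch of what such an authorization artifact might contain, assuming a warrant is just a signed record of who approved which instruction (the field names and signing scheme are illustrative, not the article's schema):
  import hashlib, hmac, json, time

  SIGNING_KEY = b"example-key"  # illustrative; a real system would use per-user keys

  def issue_warrant(principal, instruction, scope):
      # A log entry records what the agent did; a warrant records what a human approved.
      body = {
          "principal": principal,        # who signed off
          "instruction": instruction,    # what they actually asked for
          "scope": scope,                # actions/tools that instruction covers
          "issued_at": int(time.time()),
      }
      payload = json.dumps(body, sort_keys=True).encode()
      body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
      return body

  warrant = issue_warrant("analyst@example.com", "summarize Q3 reports", ["read:reports"])
Then "I didn't authorize that" stops being word-against-logs: either a warrant exists whose scope covers the action and whose signature verifies, or it doesn't. | |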
|