phoe-krk 9 hours ago

> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

If someone decided to paint a school's interior with toxic paint, it's not "the paint poisoned the kids on its own"; it's "someone chose to use a paint that can poison people".

Somebody was responsible for choosing to use a tool that carries this class of risk and explicitly did not follow the known and established protocols for securing against it. The consequences are that person's to bear; otherwise the concept of responsibility loses all value.
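
To make "known and established protocols" concrete: one common pattern is default-deny tool access with a human-approval gate on anything irreversible, like outbound email. Here's a minimal sketch in Python; GuardedToolbox and the tool names are hypothetical illustrations, not any real framework's API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class GuardedToolbox:
    # Tools the agent may call freely (read-only, reversible).
    safe_tools: dict[str, Callable[..., str]] = field(default_factory=dict)
    # Tools with external side effects; calls are queued, never run directly.
    gated_tools: dict[str, Callable[..., Any]] = field(default_factory=dict)
    pending: list[tuple[str, tuple, dict]] = field(default_factory=list)

    def call(self, name: str, *args, **kwargs) -> str:
        if name in self.safe_tools:
            return self.safe_tools[name](*args, **kwargs)
        if name in self.gated_tools:
            # Don't execute: record the request for a human to review.
            self.pending.append((name, args, kwargs))
            return f"'{name}' queued for human approval; not executed."
        # Default-deny: anything not explicitly allowlisted is refused.
        return f"'{name}' is not an allowed tool."

    def approve_pending(self) -> None:
        # A human inspects self.pending out of band, then releases the queue.
        for name, args, kwargs in self.pending:
            self.gated_tools[name](*args, **kwargs)
        self.pending.clear()

box = GuardedToolbox(
    safe_tools={"summarize": lambda text: text[:200]},
    gated_tools={"send_email": lambda to, body: print(f"sent to {to}")},
)
# The agent asking to email a competitor gets queued, not executed:
print(box.call("send_email", to="rival@example.com", body="..."))
```

The point of the sketch is the default-deny branch: the agent only reaches side effects that somebody explicitly allowlisted, and the irreversible ones only after a human signs off, which is exactly where the responsibility attaches.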

Muromec 7 hours ago | parent | next [-]

>Somebody was responsible for choosing to use a tool that carries this class of risk and explicitly did not follow the known and established protocols for securing against it. The consequences are that person's to bear; otherwise the concept of responsibility loses all value.

What if I hire you (instead of an LLM) to summarize the reports and you decide to email the competitors? What if we work in an industry where you have to be sworn in with an oath to protect secrecy? What if I did (or didn't) check with the police about your previous deeds, but it's the first time you've emailed competitors? What if you're a schizophrenic who heard God's voice telling you to do so, and it's the first episode you've ever had?

phoe-krk an hour ago | parent [-]

The difference is that LLMs are known to hallucinate regularly and commonly; it's their main (and only) mode of internal functioning. Human intelligence, empirically, is more than just a stochastic probability engine, so different standards apply to it than to whatever machine intelligence currently exists.

im3w1l 9 hours ago | parent | prev [-]

> otherwise the concept of responsibility loses all value.

Frankly, I think that might be exactly where we end up. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if scare tactics are no longer applicable to the way we work, it might be time to discard them.

phoe-krk 9 hours ago | parent [-]

A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone along with them and there's no option left but to hallucinate along.

It's scary that a nuclear exit starts to look like an enticing option when you're confronted with that.

direwolf20 7 hours ago | parent | next [-]

I saw some people saying the internet, particularly brainrot social media, has made everyone mentally twelve years old. It feels like it could be true.

Twelve-year-olds aren't capable of dealing with responsibility or consequences.

Muromec 7 hours ago | parent | prev | next [-]

>A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone along with them and there's no option left but to hallucinate along.

That value proposition depends entirely on whether there is also an upside to all of that. Do you actually need truth, meaning, responsibility, and consequences while you're tripping on acid? Do you even need to be alive and have a physical organic body for that? What if Ikari Gendo was actually right, and everyone else is just an asshole who won't let him be with his wife?

im3w1l 8 hours ago | parent | prev [-]

Ultimately the goal is to have a system that prevents mistakes as much as possible and adapts and self-corrects when they do happen. Even in science we acknowledge that mistakes happen and people draw incorrect conclusions, but the goal is to make that a temporary state that is fixed as more information comes in.

I'm not claiming to have all the answers about how to achieve that, but I am fairly certain punishment is not a necessary part of it.