pier25 4 hours ago

> Retain full human responsibility and accountability for any consequences arising from the use of AI systems

So if the tool doesn't do what it's supposed to be doing we should blame the user instead of the company that made the tool?

Sohcahtoa82 41 minutes ago | parent | next [-]

Your comment is a perfect example of not caring about nuance. More charitably, it comes from a place of naivete about how LLMs work.

LLMs are non-deterministic [0]. They can't be trusted to fully follow your prompts. As such, you have to be careful about what permissions they have.

Like...I use Claude Code. I allow it to run some shell commands that only read (grep, ls, find, etc.). I will never allow it to run Python code without checking with me first. Yeah, it slows me down when I have to answer its prompt for permission to run Python, but the alternative is outright dangerous.
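To make the idea concrete, here's a minimal sketch of that kind of permission gate (the function name and allowlist are hypothetical, not Claude Code's actual config): read-only commands pass automatically, anything else requires human approval.

```python
import shlex

# Hypothetical allowlist: commands that only read, never write.
READ_ONLY = {"grep", "ls", "find", "cat", "head"}

def needs_approval(command: str) -> bool:
    """Return True if an agent-issued shell command should be
    held for explicit human confirmation before running."""
    argv = shlex.split(command)
    # Empty commands and anything outside the allowlist get gated.
    return not argv or argv[0] not in READ_ONLY
```

So `needs_approval("grep -r TODO src")` lets the read through, while `needs_approval("python script.py")` or `needs_approval("rm -rf build")` stops and asks first.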

Compare this with any other tool, say, something as simple as `rm`. I expect that if I call `rm some.file`, it will only delete that file. If it deletes anything else, that's absolutely the fault of the tool, and I should not bear any responsibility for mistakes the tool makes as long as my input was correct.

I do not give LLMs that same latitude. LLMs operate probabilistically and have far more degrees of freedom in how they interpret and act on your input, so you hold them (and yourself) to a different standard of scrutiny and accountability.

[0] Technically, LLMs are completely deterministic. Run any given input through the neural network, and you'll get the exact same output [1], but that output is a list of probabilities for the next potential token. Top-k sampling, temperature, and other options essentially randomize the chosen token, making them non-deterministic in practice, though APIs will often let you disable all that and make them deterministic.
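A toy sketch of that sampling step (the logits here are made up; real models produce one logit per vocabulary token): at temperature 0 you get greedy argmax decoding, which is deterministic, while any positive temperature samples from the softmax distribution.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy (argmax) decoding, deterministic.
    temperature > 0  -> sample from the softmax distribution;
                        higher temperature flattens it.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    # Subtract the max before exponentiating for numerical stability.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = random.Random(seed).random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

With `temperature=0`, `sample_next_token([1.0, 3.0, 2.0], temperature=0)` always returns index 1; with a positive temperature the result varies run to run unless you pin the seed.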

[1] Even this statement isn't quite true because floating point math is not associative.
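The non-associativity point is easy to demonstrate: summing the same floats in a different order can produce a different result, which is why reordered reductions (e.g. parallel sums on a GPU) can perturb a model's output bit-for-bit.

```python
# Floating-point addition is not associative: grouping changes
# the rounding, so the two sums below differ in the last bit.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False
```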

susam 4 hours ago | parent | prev | next [-]

You are quoting a point from my summary and extrapolating what my post might be saying.

Even in that quote, I do not say that the user must be responsible. The point is that responsibility and accountability should remain with some humans. Depending on the case, those humans may be the people who manufactured the tool, the people who deployed it or the people who took bad output from the tool and applied it to the real world.

Did you read the actual section at <https://susam.net/inverse-laws-of-robotics.html#non-abdicati...>? It has more nuance than what the summary alone can capture.

pier25 an hour ago | parent [-]

> Even in that quote, I do not say that the user must be responsible.

I didn't say that. I asked a question so you could elaborate on which human you were referring to.

CivBase 2 hours ago | parent | prev [-]

What do you think an LLM is "supposed" to do?

At the end of the day it's just a traversal of a big weighted graph. Its output is the result of many combined probabilities. It's not deterministic, and even if it were, the input space is so massive that it would be impossible to test comprehensively.

You cannot possibly know an LLM will do what you command it to. It's impossible by design. LLMs are inherently unpredictable. They can still be useful, but that unpredictability needs to be accounted for to use them safely.

pier25 an hour ago | parent [-]

> LLMs are inherently unpredictable.

Exactly my point.

If the tool is inherently unpredictable, AI companies should either be held accountable for any mistakes or stop selling/marketing their services as if they were infallible.