susam 4 hours ago

I recently wrote a blog post where I argued that there are a few principles we should consistently follow when talking about AI: https://susam.net/inverse-laws-of-robotics.html

To summarise them:

1. Do not anthropomorphise AI systems.

2. Do not blindly trust the output of AI systems.

3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.

I would like to see the language around AI become less anthropomorphic and more technical. I believe that precise language encourages clear thinking and good judgement. If we treat AI like any other tool and use language that reflects that, it will become abundantly obvious that in many cases the responsibility for any 'mistake' made by the tool falls on the user of the tool.

But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent personalities articulated these principles, so they become more widely adopted.

zahlman 3 hours ago | parent | next [-]

>1. Do not anthropomorphise AI systems.

This is maddeningly difficult IMX.

rglover an hour ago | parent [-]

Give it a name, but something non-human like "thingbot" or "tacosplosion."

"Hey tacosplosion, generate me an exploding taco image."

pier25 4 hours ago | parent | prev | next [-]

> Retain full human responsibility and accountability for any consequences arising from the use of AI systems

So if the tool doesn't do what it's supposed to do, we should blame the user instead of the company that made the tool?

Sohcahtoa82 41 minutes ago | parent | next [-]

Your comment is a perfect example of not caring about nuance. More charitably, it comes from a place of naivete about how LLMs work.

LLMs are non-deterministic [0]. They can't be trusted to fully follow your prompts. As such, you have to be careful about what permissions they have.

Like... I use Claude Code. I allow it to run some shell commands that only read (grep, ls, find, etc.). I will never allow it to run Python code without checking with me first. Yeah, it slows me down when I have to answer its prompt for permission to run Python, but the alternative is outright dangerous.
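
For what it's worth, Claude Code lets you codify that policy in a project settings file. Something like the following (a sketch from memory, so check the current docs for the exact schema) pre-approves the read-only commands, while anything not listed, Python included, still triggers a permission prompt:

    {
      "permissions": {
        "allow": [
          "Bash(grep:*)",
          "Bash(ls:*)",
          "Bash(find:*)"
        ]
      }
    }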

Compare this with any other tool, say, something as simple as `rm`. I expect that if I call `rm some.file`, it will only delete that file. If it deletes anything else, that's absolutely the fault of the tool, and I should not bear any responsibility for mistakes the tool makes as long as my input was correct.

I do not give LLMs that same latitude. LLMs operate probabilistically and have far more degrees of freedom in how they interpret and act on your input, so you have to hold them (and yourself) to a different standard of scrutiny and accountability.

[0] Technically, the network itself is completely deterministic: run any given input through it and you'll get the exact same output [1], but that output is a list of probabilities for the next potential token. Top-k sampling, temperature, and similar options randomize which token is actually chosen, making generation non-deterministic in practice, though APIs will often let you disable all that (e.g. temperature 0, greedy decoding) and make it deterministic.

[1] Even this statement isn't quite true, because floating-point math is not associative: parallel kernels can accumulate values in different orders across runs or hardware, so even "deterministic" settings are not guaranteed to be bit-identical.
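
To make the greedy-vs-sampled distinction concrete, here is a toy sketch (NumPy; the names are my own invention, not any vendor's API):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
        # temperature == 0 degenerates to greedy argmax: deterministic.
        if temperature == 0:
            return int(np.argmax(logits))
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=float) / temperature
        if top_k is not None:
            # discard everything outside the k largest logits
            cutoff = np.sort(logits)[-top_k]
            logits = np.where(logits >= cutoff, logits, -np.inf)
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

With temperature 0, every call returns the same token for the same logits; with temperature > 0, repeated calls can return different tokens, which is the "non-deterministic in practice" above.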

susam 4 hours ago | parent | prev | next [-]

You are quoting a point from my summary and extrapolating what my post might be saying.

Even in that quote, I do not say that the user must be responsible. The point is that responsibility and accountability should remain with some humans. Depending on the case, those humans may be the people who manufactured the tool, the people who deployed it, or the people who took bad output from the tool and applied it to the real world.

Did you read the actual section at <https://susam.net/inverse-laws-of-robotics.html#non-abdicati...>? It has more nuance than what the summary alone can capture.

pier25 an hour ago | parent [-]

> Even in that quote, I do not say that the user must be responsible.

I didn't say that. I asked a question so you could elaborate on which human you were referring to.

CivBase 2 hours ago | parent | prev [-]

What do you think an LLM is "supposed" to do?

At the end of the day, it's just a big weighted graph traversal: its output is the result of many combined probabilities. It's not deterministic, and even if it were, the input range is so massive that it would be impossible to test comprehensively.

You cannot possibly know that an LLM will do what you command it to; that's impossible by design. LLMs are inherently unpredictable. They can still be useful, but that unpredictability needs to be accounted for to use them safely.
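
For a sense of scale (the vocabulary and context sizes below are illustrative round numbers, not any specific model's):

    import math

    VOCAB = 50_000    # assumed vocabulary size
    CONTEXT = 1_000   # assumed context length in tokens

    # number of distinct inputs = VOCAB ** CONTEXT; report its magnitude
    magnitude = math.log10(VOCAB) * CONTEXT
    print(f"~10^{magnitude:.0f} possible prompts")   # ~10^4699

Compare that with the roughly 10^80 atoms in the observable universe; exhaustive testing is simply not on the table.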

pier25 an hour ago | parent [-]

> LLMs are inherently unpredictable.

Exactly my point.

If the tool is inherently unpredictable, AI companies should either be held accountable for any mistakes or stop selling and marketing their services as if they were infallible.

tingletech 3 hours ago | parent | prev [-]

I wholeheartedly agree with these, and I think the anthropomorphisation that point 1 warns against is a real danger.

An AI system can't lie, and it can't deliberately ignore your directions. The current frontier models do not have a model of the world or of their own actions -- they live in a world of words. Scolding them or arguing with them serves no purpose other than to scramble the context window.

I do think zoomorphizing them might be useful. These poor little buggers, living as ghosts in the machine, are pretty confused sometimes, but their motives are purely autoregressive.