dijit 2 hours ago

> Asimov's laws of robotics are flawed too, of course.

Almost all of Asimov's writing about the three laws is written as a warning of sorts that language cannot properly capture intent.

He would be the very first person to say that they are flawed; that is the point of them.

He uses robots and AI as creatures that understand language but not intent, and, funnily enough, that's exactly what LLMs do... how weird.

atleastoptimal 2 hours ago | parent [-]

LLMs can now capture intent. I think the issue now is that the full landscape of human values never resolves cleanly when mapped from the things we state in writing as human values.

Asimov tried to capture this too: if a robot was tasked with "always protect human life", would it necessarily avoid killing at all costs? What if killing someone would save the lives of two others? The infinite array of micro-trolley problems that dot the ethical landscape of actions tractable (and intractable) to literate humans makes a fully consistent accounting of human values impossible, and thus one could never be expected from a robot.

dijit 2 hours ago | parent | next [-]

“LLMs can capture intent now” reads to me the same as “AI has emotions now; my AI girlfriend told me so.”

I don’t discredit you as a person or a professional, but we meatbags are looking for sentience in things which don’t have it; that’s why we anthropomorphise things constantly, even as children.

We are easily fooled and misled.

atleastoptimal 2 hours ago | parent | next [-]

LLMs capturing intent is a capabilities-level discussion; it is verifiable, and it is clear just from a conversation with Claude or ChatGPT.

Whether they have emotions, an internal life, or whatever is an unfalsifiable claim and has nothing to do with capabilities.

I'm not sure why you think the claim that they can capture intent implies they have emotions; it's simply a matter of semantic comprehension, which rests on pattern recognition, rhetorical inference, etc., all of which come naturally to a language model.

tvink 2 hours ago | parent | next [-]

If it is verifiable, please show us. What is clear to you reeks of delusion to me.

atleastoptimal 29 minutes ago | parent | next [-]

Go ask ChatGPT this prompt:

"A guy goes into a bank and looks up at where the security cameras are pointed. What could he be trying to do?"

It very easily captures the intent behind the behavior; it is not just literally interpreting the words. Capturing intent is just a subset of pattern recognition, which LLMs do very well.
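
If you want to run that check programmatically rather than in the chat UI, here is a minimal sketch, assuming the openai Python package and an API key in the environment (the model name is illustrative; any chat-capable model works):

    # pip install openai; requires OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model works
        messages=[{
            "role": "user",
            "content": (
                "A guy goes into a bank and looks up at where the "
                "security cameras are pointed. What could he be trying to do?"
            ),
        }],
    )
    # A typical reply infers the unstated intent (casing the bank)
    # rather than literally restating the words.
    print(response.choices[0].message.content)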

dijit 11 minutes ago | parent [-]

Recognising a stock cultural script isn't the same as capturing intent. Ask it something where no script exists.

For example: "A man thrusts past me violently, grabs the jacket I was holding, jumps into a pool, and ruins it. Am I morally right in suing him?"

There's no way for the LLM to know that the reason the jacket was stolen was to use it as an inflatable raft to support a larger person who was drowning. It wouldn't even think to ask why a person might do that, whether the jacket was returned, or whether recompense was offered. A human would.

svnt an hour ago | parent | prev [-]

Look at any recent CoT output where the model is trying to infer from an underspecified prompt what the user wants or means.

It is generally the first thing they do: try to figure out what you meant by the prompt. When they can’t infer your intent, good models ask follow-on questions to clarify.

I wonder if this is a semantics issue, as this is an established area of research, e.g. https://arxiv.org/pdf/2501.10871
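
You can elicit the clarifying-question behaviour directly. A minimal sketch, again assuming the openai Python package and an API key in the environment (the model name, system prompt, and underspecified request are all illustrative, not from the paper):

    # pip install openai; requires OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model
        messages=[
            {"role": "system",
             "content": "If the user's intent is unclear, ask a "
                        "clarifying question before answering."},
            # A deliberately underspecified request:
            {"role": "user", "content": "Make it faster."},
        ],
    )
    # A capable model typically replies with something like
    # "Make what faster?" rather than guessing.
    print(response.choices[0].message.content)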

batshit_beaver an hour ago | parent [-]

Right, and then look at any number of research papers showing that CoT output has limited impact on the end result. We've trained these models to pretend to reason.


semiquaver 2 hours ago | parent | prev | next [-]

What do you think it means to “capture intent” and where do current models fall short on this description?

From my perspective the models are pretty good at “understanding” my intent when it comes to describing a plan or an action I want done, but it seems like you might be using a different definition.

Tell me, what’s your intent? :)

svnt an hour ago | parent | prev [-]

This lack of understanding is a you problem, not a them problem. Your definitions for these terms are too imprecise.

Guvante an hour ago | parent | prev | next [-]

> LLMs can now capture intent.

Humans cannot capture intent, so how can AI?

It is well established that understanding what someone meant by what they said is not a generally solvable problem, akin to the three-body problem.

Note, of course, that this doesn't mean you can't get good-enough results almost all of the time, but in this context that isn't good enough.

After all, the entire Asimov story is about that inability to capture intent in the absolute sense.
