deaux 2 hours ago

> I consulted Claude chat and it admitted this is a major problem with Claude these days, and suggested that I should ask what the coordinates of UI controls are on the screenshot, thus forcing it to look

If 3 years into LLMs even HNers still don't understand that the response they give to this kind of question is completely meaningless, the average person really doesn't stand a chance.

motoboi 2 hours ago | parent [-]

The whole “chat with an AI” paradigm is the culprit here: it primes people to think they are actually having a conversation with something that has a mind.

It’s just a text generator that produces plausible text for the role play. But the chat paradigm is genuinely useful to the human; chat is a natural I/O interface for us.

adriand an hour ago | parent [-]

I disagree that it’s “just a text generator” but you are so right about how primed people are to think they’re talking to a person. One of my clients has gone all-in on openclaw: my god, the misunderstanding is profound. When I pointed out a particularly serious risk he’d opened up, he said, “it won’t do that, because I programmed it not to”. No, you tried to persuade it not to with a single instruction buried in a swamp of markdown files that the agent is itself changing!

motoboi an hour ago | parent | next [-]

I insist on the text generator nature of the thing. It’s just that we built harnesses to activate on certain sequences of text.

Think of it as three people in a room. One (the director) says: you, with the red shirt, you are now a plane copilot. You, with the blue shirt, you are now the captain. You are about to take off from New York to Honolulu. Action.

Red: Fuel checked, captain. Want me to start the engines?

Blue: yes please, let’s follow the procedure. Engines at 80%.

Red: I’m executing: raise the levers to 80%

Director: levers raised.

Red: I’m executing: read engine stats meters.

Director: Stats read engine ok, thrust ok, accelerating to V0.

Now pretend that when the director hears “I’m executing: raise the levers to 80%”, instead of roleplaying, she actually issues a command to raise the engine levers of a plane to 80%. When she hears “I’m executing: read engine stats”, she actually gets data from the plane and provides it to the actor.

See how text generation for a role play can actually be used to act on the world?

In this thought experiment, the human is the blue shirt, Opus 4-6 is the red shirt, and Claude Code is the director.
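The director's role can be sketched as a tiny harness that scans generated text for action lines and executes them. This is a minimal, hypothetical Python sketch: the "I'm executing:" trigger phrase, the tool functions, and their names are all illustrative inventions from the thought experiment above, not any real Claude Code internals.

```python
import re

# Hypothetical "tools" -- stand-ins for real side effects on the world.
def set_levers(percent):
    return f"levers raised to {percent}%"

def read_engine_stats():
    return "engines ok, thrust ok"

# The director: watch the role-play text for action lines.
ACTION_PATTERN = re.compile(r"I'm executing: (.+)")

def director(model_output):
    """If the generated text contains an action line, actually perform it
    and return the result as text to feed back into the role play."""
    match = ACTION_PATTERN.search(model_output)
    if not match:
        return None  # plain dialogue, nothing to execute
    action = match.group(1)
    if action.startswith("raise the levers to"):
        percent = int(re.search(r"(\d+)", action).group(1))
        return set_levers(percent)
    if action.startswith("read engine stats"):
        return read_engine_stats()
    return f"unknown action: {action}"

# Generated text drives real actions; results become more text in the scene.
print(director("Red: I'm executing: raise the levers to 80%"))   # levers raised to 80%
print(director("Red: I'm executing: read engine stats meters.")) # engines ok, thrust ok
```

The point of the sketch is that nothing in the loop requires a mind: the model only ever emits text, and the harness decides which strings count as commands.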

samrus an hour ago | parent | prev [-]

> No, you tried to persuade it not to with a single instruction

Even "persuade" is too strong a word. These things don't have the motivations needed for persuasion to be a thing. What your client did was put one data point into the context that the model will use to generate the next tokens. If that one data point doesn't shift the context enough to make it produce an output that corresponds to it, then it won't. That's it, no sentience involved.