raincole 9 hours ago

I know anthropomorphizing LLMs has been normalized, but holy shit. I hope the language in this article is intentionally chosen for a dramatic effect.

pjc50 8 hours ago | parent | next [-]

The thing is... what else can you do? All the advice on how to get results out of LLMs talks in the same way, as if it's a negotiation or giving a set of instructions to a person.

You can do a mental or physical search and replace all references to the LLM as "it" if you like, but that doesn't change the interaction.

nialse 8 hours ago | parent | prev | next [-]

Agreed. We should not be anthropomorphising LLMs or having them mimic humans.

Animats 8 hours ago | parent [-]

It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.

nialse 7 hours ago | parent | next [-]

Maybe we should change that? Of course, symbolic AI was the holy grail until statistical AI came in and swept the floor. Maybe something else, though.

chrisjj an hour ago | parent | prev [-]

They ingest text written in first and third person and regurgitate in first person only, right?

zingar 8 hours ago | parent | prev [-]

Fascinating. This is invisible to me, what anthropomorphising did you notice that stood out?

philipwhiuk 2 hours ago | parent [-]

From the first sentence:

> I asked an AI agent to solve a programming problem

You're not asking it to solve anything. You provide a prompt and it does autocomplete. The only reason it doesn't run forever is that one of the generated tokens is interpreted as 'done'.
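The mechanism described above, generation as a loop that stops when a designated "done" token appears, can be sketched roughly as follows. This is a toy illustration, not any real model's or library's API; `fake_next_token`, `EOS`, and the canned token script are all made up for the example.

```python
EOS = "<eos>"  # the token interpreted as "done"

def fake_next_token(context):
    """Stand-in for a model's next-token prediction (illustrative only)."""
    script = ["def", "add(a,", "b):", "return", "a+b", EOS]
    return script[len(context)] if len(context) < len(script) else EOS

def generate(prompt, max_tokens=32):
    tokens = []
    for _ in range(max_tokens):  # hard cap so it cannot run forever
        nxt = fake_next_token(tokens)
        if nxt == EOS:  # generation ends because "done" was generated
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("write an add function"))
```

Real inference loops also cap on a maximum token count, which is the other reason generation terminates even if the stop token never appears.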

xeyownt an hour ago | parent | next [-]

What a poor explanation.

By the same reasoning, human beings are only a bunch of atoms, and the only reason they don't pass through other humans is atomic forces.

When your abstraction level is too low, it doesn't explain anything, because the system that is built on it is way too complex.

chrisjj an hour ago | parent [-]

"Autocomplete" is not an abstraction level. It is the actual programmed behaviour.

rcxdude 10 minutes ago | parent [-]

At a certain level of abstraction, yes.

SpicyLemonZest an hour ago | parent | prev | next [-]

I just don't think that's correct. When I ask Claude to solve something for me, it takes a number of actions on my computer which are neither writing text nor interpreting the done token. It executes the build, debugs tests, et cetera. Sometimes it spawns mini-mes when it thinks that would be helpful! I think saying this is all "autocomplete" is a category error, like saying that you shouldn't talk about clicking buttons or running programs because it's all just electrically charged silicon under the hood.
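The distinction drawn here, that an agent is a token loop whose outputs are also parsed and executed as actions, can be sketched like this. Everything below is illustrative: the `TOOL:` prefix, the tool names, and the canned model turns are assumptions for the example, not how Claude or any real agent framework actually works.

```python
def run_tool(call):
    """Stand-in for executing an action on the user's machine."""
    tools = {"build": "build ok", "test": "2 tests passed"}
    return tools.get(call, "unknown tool")

def agent(model_outputs):
    transcript = []
    for out in model_outputs:        # each item is one model "turn"
        if out.startswith("TOOL:"):  # generated text parsed as an action
            result = run_tool(out.removeprefix("TOOL:"))
            transcript.append(result)  # result would be fed back to the model
        elif out == "DONE":
            break
        else:
            transcript.append(out)
    return transcript

print(agent(["TOOL:build", "TOOL:test", "fix applied", "DONE"]))
```

The loop is still "generate tokens until done", but because some generated text is interpreted as commands with side effects, calling the whole system "autocomplete" elides what makes it an agent.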
