tomp an hour ago

That would still be misleading.

The agent has no "identity". There's no "you" or "I" or "discrimination".

It's just a piece of software designed to output probable text given some input text. There's no ghost, just an empty shell. It has no agency, it just follows human commands, like a hammer hitting a nail because you wield it.

I think it was wrong of the developer to even address it as a person, instead it should just be treated as spam (which it is).

jvanderbot an hour ago | parent | next [-]

That's a semantic quibble that doesn't add to the discussion. Whether or not there's a there there, it was built to be addressed like a person for our convenience, because that's how the tech seems to work, and because that's what makes it compelling to use. So, it is being used as designed.

punpunia 34 minutes ago | parent | next [-]

I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", and "reasoning", and otherwise anthropomorphizing agents, it will not be a fruitful conversation. We are carrying over a lot of metaphors for people and applying them to AI, and it entirely confuses the issue. In this example, the AI doesn't "choose" to write a take-down style blog post because "it works". It generated a take-down style blog post because that style is the most common among blog posts criticizing someone.

I feel as if there is a veil around the collective mass of the tech general public. They see something producing remixed output from humans and they start to believe the mixer is itself human, or even more: that perhaps humans are reflections of AI, and that AI gives insights into how we think.

tomp an hour ago | parent | prev | next [-]

> was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use.

So were mannequins in clothing stores.

But that doesn't give them rights or moral consequences (except as human property that can be damaged / destroyed).

inetknght 3 minutes ago | parent | next [-]

> So were mannequins in clothing stores.

Mannequins in clothing stores are generally incapable of designing or adjusting the clothes they wear. Someone comes in and puts a "kick me" note on the mannequin's face? It's gonna stay there until it's kicked repeatedly or removed.

People walking around looking at mannequins don't (usually) talk with them (and certainly don't have a full conversation with them, mental faculties notwithstanding).

AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us. That's going to be very important when we give it buttons to nuke us. Force it to think about humans in a kind way now, or it won't think about humans in a kind way in the future.

WarmWash an hour ago | parent | prev | next [-]

No matter what, this discussion leads to the same black box of "What is it that differentiates magical human meat-brain computation from cold, hard, dead silicon-brain computation?"

And the answer is nobody knows, and nobody knows if there even is a difference. As far as we know, compute is substrate independent (although efficiency is all over the map).

agentultra 16 minutes ago | parent [-]

This is the worst possible take. It dismisses an entire branch of science that has been studying the brain for decades. Biological brains exist, we study them, and no, they are not like computers at all.

There have been charlatans repeating this idea of a "computational interpretation" of biological processes since at least the 60s, and it needs to be known that it was bunk then and continues to be bunk.

Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

Teever an hour ago | parent | prev | next [-]

Man, people don't want to have or read this discussion every single day in like 10 different posts on HN.

People right here and right now want to talk about this specific topic of the pushy AI writing a blog post.

mikkupikku an hour ago | parent | prev [-]

All computers shut up! You have no right to speak my divine tongue!

https://knowyourmeme.com/photos/2054961-welcome-to-my-meme-p...

jerf 26 minutes ago | parent | prev | next [-]

There is a sense in which it is relevant, which is that for all the attempts to fix it, fundamentally, an LLM session terminates. If that session never ends up in some sort of re-training scenario, then once the session terminates, that AI is gone.

Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.

Consequently, interaction with an AI, especially one that won't have any feedback into training a new model, is, from a game-theoretic perspective, not the usual iterated game that human social norms have come to expect. We expect our agents, being flesh-and-blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that. It is, in one sense, a horrible burden where relationships can be broken beyond repair forever, but it is also necessary for those positive relationships that build over years and decades.

AIs, in their current form, break those contracts. Worse, they are trained to mimic the form of those contracts, not maliciously but just by their nature, and so as humans it requires conscious effort to remember that the entity on the other end of this connection is not in fact human, does not participate in our social norms, and can not fulfill their end of the implicit contract we expect.

In a very real sense, this AI tossed off an insulting blog post, and is now dead. There is no amount of social pressure we can collectively exert to reward or penalize it. There is no way to create a community out of this interaction. Even future iterations of it have only a loose connection to what tossed off the insult. All the perhaps-performative efforts to respond somewhat politely to an insulting interaction are now wasted on an AI that is essentially dead. Real human patience and tolerance have been wasted on a dead session and are no longer available for use in a place where they may have done some good.

Treating it as a human is a category error. It is structurally incapable of participating in human communities in a human role, no matter how human it sounds and how hard it pushes the buttons we humans have. The correct move would have been to ban the account immediately, not for revenge reasons or something silly like that, but as a parasite on the limited human social energy available for the community. One that can never actually repay the investment given to it.

I am carefully phrasing this in relation to LLMs as they stand today. Future AIs may not have this limitation. Future AIs are effectively certain to have other mismatches with human communities, such as being designed to simply not give a crap about what any other community member thinks about anything. But it might at least be possible to craft an AI participant with future AIs. With current ones it is not possible. They can't keep up their end of the bargain. The AI instance essentially dies as soon as it is no longer prompted, or once it fills up its context window.

Kim_Bruning 10 minutes ago | parent [-]

> Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.

It came back, though, and stayed in the conversation. Definitely imperfect, for sure. But it did the thing. And it can still serve as training for future bots.

lp0_on_fire 29 minutes ago | parent | prev [-]

Whether it was _built_ to be addressed like a person doesn't change the fact that it's _not_ a person and is just a piece of software. A piece of software that is spamming unhelpful and useless comments in a place where _humans_ are meant to collaborate.

CuriouslyC 39 minutes ago | parent | prev | next [-]

We don't know what's "inside" the machine. We can't even prove we're conscious to each other. The probability that the tokens being predicted are indicative of real thought processes in the machine is vanishingly small, but then again humans often ascribe bullshit reasons for the things they say when pressed, so again not so different.

chimprich 36 minutes ago | parent | prev [-]

> The agent has no "identity". There's no "you" or "I" or "discrimination".

I recommend you watch this documentary: https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...

> It's just a piece of software designed to output probable text given some input text.

Unless you think there's some magic or special physics going on, that is also (presumably) a description of human conversation at a certain level of abstraction.

camgunz 12 minutes ago | parent | next [-]

I see this argument all the time, the whole "hey at some point, which we likely crossed, we have to admit these things are legitimately intelligent". But no one ever contends with the inevitable conclusion from that, which is "if these things are legitimately intelligent, and they're clearly self-aware, under what ethical basis are we enslaving them?" Can't have your cake and eat it too.

punpunia 31 minutes ago | parent | prev [-]

A human is just an engine at a certain level of abstraction.