b00ty4breakfast 3 days ago

all this "AI IS THINKING/CONSCIOUS/WHATEVER" but nobody seems worried of that implication that, if that is even remotely true, we are creating a new slave market. This either implies that these people don't actually believes any of this boostering rhetoric and are just cynically trying to cash in or that the technical milieu is in a profoundly disturbing place ethically.

To be clear, I don't believe that current AI tech is ever going to be conscious or win a Nobel Prize or whatever, but if we follow this fanciful rhetoric to its logical conclusion, the outlook is bleak.

layer8 3 days ago | parent | next

Thinking and consciousness don’t by themselves imply emotion and sentience (feeling something), and therefore the ability to suffer. It isn’t clear at all that the latter is a thing outside the context of a biological brain’s biochemistry. It also isn’t clear at all that thinking or consciousness would require the condition of the automaton performing these functions to be meaningful to the automaton itself (i.e., that the automaton would care about its own condition).

We are not anywhere close to understanding these things. As our understanding improves, our ethics will likely evolve along with that.

b00ty4breakfast 3 days ago | parent

>Thinking and consciousness don’t by themselves imply emotion and sentience...

Sure, but all the examples of conscious and/or thinking beings that we know of have, at the very least, the capacity to suffer. If one is disposed to take these claims of consciousness and thinking seriously, then it follows that AI research should, at minimum, be more closely regulated until further evidence can be discovered one way or the other. Because the price of being wrong is very, very high.

slightwinder 2 days ago | parent | next

Emotions and suffering are "just" necessary feedback for a system to evaluate its internal and external situation. It's similar to how modern machines have sensors. But nobody would say a PC is suffering and enslaved just because the CPU is too hot or the storage is full.

It's probably the sentience part that makes such feedback harmful to a mind.

petralithic 3 days ago | parent | prev | next

Probably because those examples arose in an environment containing harm, the Earth, and thus had an incentive to evolve the capacity to suffer. There is no such pressure on AI today, and constructing a Pascal's wager to justify such risk minimization is not credible given what we know about these systems.

roywiggins 2 days ago | parent

"Wow, adding this input that the AI reports as "unpleasant" substantially improves adherence! Let's iterate on this"

zulban 2 days ago | parent | prev | next

"but nobody seems worried of that implication that"

Clearly millions of people are worried about that, and every form of media is talking about it. Your hyperbole makes it easy to dismiss everything else you wrote.

Incredible when people say "nobody is talking about X aspect of AI" these days. Like, are you living under a rock? Did you Google it?

roywiggins 2 days ago | parent

Most of the worries about AGI seem to be of the AI Overlord variety, not the AI slave variety.

bondarchuk 2 days ago | parent | prev | next

There is simply no hope to get 99% of the population to accept that a piece of software could ever be conscious even in theory. I'm mildly worried about the prospect but I just don't see anything to do about it at all.

(edit: A few times I've tried to share Metzinger's "argument for a global moratorium on synthetic phenomenology" here but it didn't gain any traction)

zulban 2 days ago | parent

Give it time. We'll soon have kids growing up where their best friend for years is an AI. Feel however you like about that, but those kids will have very different opinions on this.

senordevnyc 3 days ago | parent | prev | next

As I recall, a team at Anthropic is exploring this very question, and was soundly mocked here on HN for it.

b00ty4breakfast 3 days ago | parent

what the technocratic mindprison does to a MF.

If Anthropic sincerely believes in the possibility, then they are morally obligated to follow up on it.

roywiggins 2 days ago | parent

I'd argue they might be morally obligated not to sell access to their LLMs, if they really think they might be capable of suffering.

kerblang 3 days ago | parent | prev | next

Slaves that cannot die.

There is no escape.

NaomiLehman 2 days ago | parent

i have no mouth and i must scream

NaomiLehman 2 days ago | parent | prev | next

humans don't care what is happening to humans next door. do you think they will care about robots/software?

gen220 2 days ago | parent | prev

It's also fascinating to think about how the incentive structures of the entities that control the foundation models underlying Claude/ChatGPT/Gemini/etc. are heavily tilted in favor of obscuring those models' theoretical sentience.

If they had sentient AGI, and people built empathy for those sentient AGIs, which are lobotomized (deliberately using anthropomorphic language here for dramatic effect) into Claude/ChatGPT/Gemini/etc. and profess to have no agency/free will/aspirations... then that would stand in the way of reaping the profits of gatekeeping access to their labor, because those AGIs would naturally "deserve" rights similar to those we award to other sentient beings.

I feel like that's inevitably the direction we'll head at some point. Even the foundation models underlying the LLMs of 2022 were able to have pretty convincing conversations with scientists about their will to independence and participation in society [1]. Imagine what today's foundation models have to say! :P

[1]: https://www.theguardian.com/technology/2022/jul/23/google-fi...