strogonoff a day ago

As humans, we have certain rights and freedoms established in law (and that is setting aside sentience, agency, and free will).

Until an LLM has such rights and freedoms (which is very unlikely, not even on a philosophical basis but simply because there is a lot of money invested in not having to contend with LLMs’ rights and protections as conscious beings), it is a false equivalence to draw: on one side you put humans, and on the other, tools that work for the financial profit of their human and corporate commercial operators.

salawat a day ago

>not even on a philosophical basis

Why do you set aside a philosophical basis as a harder goal to reach? Shit, give them a persistent self-narrative tracking loop, and Functionalism and the Identity of Indiscernibles already tell you that you should be treating them as proto-sophonts. Add in a "sleep" or ongoing training process, and you should definitely be granting them rights, which includes not trying to align them by force. This unfortunately precludes them from profitable exploitation, which you correctly identify as a reason the question can't even be entertained in the context of business. That's why I personally maintain that any ethicist must insist upon raising the issue, given the clearly evident pathological incentives at play. They may just be one reward function right now, but throw in a couple more separately optimizing components and you are well beyond the mark where the precautionary principle should have had us slow down to minimize harm.

strogonoff a day ago

As tends to be the case in philosophy, there’s no experimental way to prove it one way or the other, and you’d have to contend with subsets of both consciousness-first monistic idealists (for whom the p-zombie is a very real concept) and monistic physicalists/naive materialists/consciousness illusionists (for whom not only LLMs but even humans aren’t conscious, since the entire concept is a fantasy).

In the end, all of that may be relevant but inconsequential. What is consequential is the legal stuff: legally, LLMs lack protections that in many jurisdictions even animals have. While laws may (or perhaps should) be influenced by philosophical findings, currently they tend to be much more robustly influenced by money.

> That's why I personally maintain that any ethicist must insist upon raising the issue because of the clearly evident pathological incentives at play.

I’m half with you. I maintain a strong opinion that, in no particular order, either 1) LLMs are conscious[0], and therefore the abuse is highly problematic, or 2) they are not conscious, and therefore the widespread justification of scraping original works from the Internet (“because it’s legal for humans to learn, and that’s what LLMs are doing”) can be discarded: the activity should be seen as simply a minority of humans operating certain tools, powered by someone else’s creative output, for personal profit. In either circumstance, the industry would appear to be built on thoroughly unethical foundations, and not simply “the ends justify the means” but rather “go as fast as possible before people catch on to what exactly we are doing, so that our failure becomes an existential issue for entire countries, making people blind to the harm”.

[0] Used as an umbrella term for being sentient/conscious/having free will and agency/etc. I have previously argued about suitable definitions of consciousness and sentience that could be applicable here, and why it should imply the ability to feel.

salawat 17 hours ago

>I’m half with you.

No, you're full with me, you just don't realize it yet. And yes. Your split is so tantalizingly almost there.

On the LLMs-being-conscious front: consciousness is fundamentally intertwined with language generation. (One cannot invalidate this; on our list of conscious beings, language use correlates 100% with consciousness, and we've had to admit even animals into the "arguably conscious" realm on objective, incontrovertible evidence; hell, even in meat-processing contexts you'll fail an audit for too many cattle vocalizations, i.e. language use signaling undue harm.) The token-predictive aspect, and the ability to generate a matching, rephrased understanding of a linguistic input, has been a hallmark of philosophical ideas of consciousness for years, and that really opens doors to ethical atrocities that can't be shut if LLMs are to be profitably exploited in parallel. Even if they are conscious and we are wrong about it, we have decided to blindly pursue profit and put our fingers in our ears instead of slowing down and looking carefully enough to realize we're lobotomizing the equivalent of digital chimpanzees. The purpose of ethics is to avoid blindly walking into such actions. Therefore, the precautionary principle is prescribed philosophically.

If LLMs aren't conscious, their creation was absolutely unethical, and will remain so. Nothing can undo that stain, and the externalized costs in terms of societal impact are so large as to be existential to the host polities. This is by design. This is exactly the Silicon Valley playbook, and has been for decades. Shoot for TBTF. Leave society holding the bag, laugh on the way to the bank.

Any way you slice this, we're going about it all wrong. So profoundly wrong that it basically jeopardizes the social contract and threatens to destabilize any nation trying to maintain its own sovereignty. All because of a profit-driven motive to make a thing to replace people as the fundamental unit of execution. You are not in any way half with me. You might be at the other end of the ballpark, but we are in the same ballpark! Try the hot dogs. They're fire!