jjcm 3 days ago

So much of the debate of whether AI can think or not reminds me of this scene from The Next Generation: https://youtu.be/ol2WP0hc0NY

LLMs hit two out of the three criteria already - self-awareness and intelligence - but we're in a similar state where consciousness is such a blurry metric to define. I feel like it won't be a binary thing; it'll be a group decision by humanity. I think it will happen in the next decade or two, and regardless of the outcome I'm excited I'll be alive to see it. It'll be such a monumental achievement by humanity. It will drastically change our perspective on who we are and what our role is in the universe, especially if this new life form surpasses us.

blacksmith_tb 3 days ago | parent [-]

Self-awareness is a bold claim, as opposed to the illusion of it. LLMs are very good at responding in a way that suggests there's a self, but I'm skeptical that this proves much about whether they actually have interior states analogous to what we recognize in humans as selfhood...

aaroninsf 3 days ago | parent | next [-]

_Interior states_ gets into some very murky philosophy of mind very quickly of course.

If you're a non-dualist (like me) concerns about qualia start to shade into the religious/metaphysical thereby becoming not so interesting except to e.g. moral philosophy.

Personally I have a long bet that when natively multimodal models on the scale of contemporary LLMs are widely deployed, their "computational phenomenology" will move the goalposts so far that the cultural debate will shift from "are they just parrots?" to the moral crisis of abusing parrots - meaning these systems will increasingly be understood as having a selfhood with moral value. Non-vegetarians may be no more concerned about the quality of "life" and conditions of such systems than they are about factory farming, but the question at least will circulate.

Prediction: by the time my kids finish college, assuming it is still a thing, it will be as common to see enthusiastic groups flyering and doing sit-ins etc on behalf of AIs as it is today to see animal rights groups.

ACCount37 3 days ago | parent | prev [-]

In the purely mechanical sense: LLMs have less self-awareness than humans, but not zero.

It's amazing how much of it they have, really - given that base models aren't encouraged to develop it at all. And yet, post-training doesn't create an LLM's personality from nothing - it reuses what's already there. Even things like metaknowledge, flawed and limited as it is in LLMs, have to trace their origins back to the base model somehow.