ablob 3 days ago

Usually it is the job of the one making a claim to prove it. So if you believe that AI does "think", you are expected to show me that it really does. Claiming "it thinks - prove otherwise" is just bad form, and it also opens the discussion up to moving the goalposts, just as you did with your brain-emulation statement. Or you could simply refuse to accept any argument made, or circumvent it by stating that the one trying to disprove your assertion got the definition wrong. There are countless ways to start a bad-faith argument using this methodology, hence: define the property -> prove the property.

Conversely, if the one asserting something doesn't want to define it, there is no useful conversation to be had (as in: "AI doesn't think, but I won't tell you what I mean by 'think'").

PS: Asking someone to falsify their own assertion doesn't seem like a good strategy here.

PPS: Even if everything about the human brain could be emulated, that would not constitute progress for your argument, since you would then have to assert that AI emulates the human brain perfectly before the argument is complete. There is no direct connection from "this AI does not think" to "the human brain can be fully emulated". Also, the difference between "does not" and "cannot" is big enough here that conflating the two is inappropriate.

CamperBob2 3 days ago | parent | next [-]

So if you believe that AI does "think" you are expected to show me that it really does.

A lot of people seemingly haven't updated their priors after some of the more interesting results published lately, such as the performance of Google's and OpenAI's models at the 2025 Math Olympiad. Would you say that includes yourself?

If so, what do the models still have to do in order to establish that they are capable of all major forms of reasoning, and under what conditions will you accept such proof?

ablob 3 days ago | parent [-]

It definitely includes me; I have no interest in staying up to date here.

For that matter, I have no opinion on whether AI thinks or not; I simply don't care. Therefore I also really can't answer your question about what more a model would have to do to establish that it is thinking (does being able to use all major forms of reasoning constitute the capability of thought to you?). I can say, however, that any such proof would have to be made on a case-by-case basis, given my current understanding of how AI is designed.

Tadpole9181 3 days ago | parent | prev [-]

Then prove to me you are thinking, lest we assume you are a philosophical zombie and need no rights or protections.

Sometimes, because of the consequences of otherwise, the order gets reversed.

ablob 3 days ago | parent [-]

Well, first of all, I never claimed that I was capable of thinking (smirk). We also haven't agreed on a definition of "thinking" yet, so, as you can read in my previous comment, there is no meaningful conversation to be had. I also don't understand how your oddly aggressive phrasing adds to the conversation, but if it helps you: my rights and protections do not depend on whether I'm able to prove to you that I am thinking. (It also derails the conversation, for what it's worth - that's a good strategy in a debating club, but those are about winning or losing, not about fostering and obtaining knowledge.)

Whatever you meant to say with "Sometimes, because of the consequences of otherwise, the order gets reversed" eludes me as well.

Tadpole9181 3 days ago | parent [-]

If I say I'm innocent, you don't say I have to prove it. Some facts are presumed to be true without a burden of evidence, because assuming otherwise could cause great harm.

So we don't require, say, minorities or animals to prove they have souls; we just inherently assume they do and make laws to protect them.

ablob 3 days ago | parent [-]

Thank you for the clarification. If you expect me to justify an action that depends on your being innocent, then I actually do need you to prove it. I wouldn't let you sleep in my room just by assuming you're innocent - or, in your words, because of the consequences of otherwise. It feels like you're moving the goalposts here: I don't want to justify an action based on something, I just want to know whether something has a specific property.

With regard to the topic: does AI think? I don't know, but I also don't want to take an action that depends on knowing whether it does (or doesn't, for that matter). In other words, I don't care. The answer could go either way, but I'd rather say that I don't know (especially since "thinking" is not defined). That means I can assume either option and consider the consequences, using some heuristic to decide which assumption is better given the action I want to justify doing or not doing. If you want me to believe an AI thinks, you have to prove it; if you want to justify an action, you may assume whatever you deem most likely. And if you want to know whether an AI thinks, then you literally can't assume it does; simple as that.