killerstorm 2 days ago

People have been trying to understand the nature of thinking for thousands of years. That's how we got logic, math, concepts of inductive/deductive/abductive reasoning, philosophy of science, etc. There were people who spent their entire careers trying to understand the nature of thinking.

The idea that we shouldn't use the word until further clarification is rather hilarious. Let's wait a hundred years until somebody defines it?

That's not how words work. People might introduce more specific terms, of course. But the word already means what we think it means.

keiferski 2 days ago | parent | next [-]

You’re mixing up and missing a few things here.

1. All previous discussion of thinking was in relation to human and animal minds. The reason this is a question in the first place is that we now ostensibly have a new thing which looks like a human mind but isn’t. That’s the question at hand here.

2. The question in this particular topic is not about technological “progress” or anything like it. It’s about determining whether machines can think, or whether they are doing something else.

3. There are absolutely instances in which the previous word doesn’t quite fit the new development. We don’t say that submarines are swimming like a fish or sailing like a boat. To suggest that “no, actually they are just swimming” is pretty inadequate if you’re trying to actually describe the new phenomenon. AIs and thinking seem like an analogous situation to me. They may be moving through the water just like fish or boats, but there is obviously a new phenomenon happening.

killerstorm 2 days ago | parent [-]

1. Not true. People have been trying to analyze whether mechanical/formal processes can "think" since at least the 18th century. Leibniz, for example, wrote:

> if we could find characters or signs appropriate for expressing all our thoughts as definitely and as exactly as arithmetic expresses numbers or geometric analysis expresses lines, we could in all subjects in so far as they are amenable to reasoning accomplish what is done in arithmetic and geometry

2. You're missing the fact that the meaning of words is defined through their use. It's an obvious fact that if people call a certain phenomenon "thinking", then "thinking" is what it's called.

3. The normal process is to introduce more specific terms and keep the more general terms general. For example, people doing psychometrics were not satisfied with "thinking", so they introduced terms like "fluid intelligence" and "crystallized intelligence" for different kinds of abilities. They didn't have to redefine what "thinking" means.

lossyalgo 2 days ago | parent [-]

Re #2: Do people actually call it thinking, or is it just clever marketing from the AI companies that, whenever you ask a question, the interface repeatedly prints out "...thinking...", and that they offer various modes with the word "thinking" in the name?

The AI companies obviously want the masses to just assume these are intelligent beings who think like humans, so that we'll trust their output as truthful.

I have an intelligent IT colleague who doesn't follow AI news at all and has zero knowledge of LLMs, other than that our company recently allowed us limited Copilot usage (with guidelines as to what data we are allowed to share). A couple of weeks ago I noticed he was asking it various mathematical questions, and I warned him to be wary of the output. He asked why, so I had him ask Copilot/ChatGPT "how many r letters are in the word strawberry".

Copilot initially said 2; then, after thinking about it, said it was actually definitely 3; then thought about it some more and said it couldn't say with reasonable certainty, but would assume it must be 2. We repeated the experiment with completely different results, but the answer was still wrong. On the 3rd attempt it got it right, though the "thinking" stages were most definitely bogus.

Considering how often this question comes up in various online forums, I would have assumed LLMs would finally get this right, but alas, here we are. I really hope the lesson instilled some skepticism about trusting the output of AI without first double-checking.
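(As an aside, the question itself is trivially checkable by a deterministic program, which is what makes the failure so jarring. A toy Python sketch, purely for illustration and nothing to do with how Copilot works internally:

    # Counting letters is exact string processing; no "thinking" required.
    def count_letter(word: str, letter: str) -> int:
        return sum(1 for ch in word.lower() if ch == letter.lower())

    print(count_letter("strawberry", "r"))  # -> 3

One common explanation for the failure is that LLMs see text as subword tokens rather than individual characters, so letter-level questions are genuinely hard for them.)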

marliechiller 2 days ago | parent | prev [-]

> But the word already means what we think it means.

But that word can mean different things to different people. Without a definition, how can you even begin to have a discussion about it?

killerstorm 2 days ago | parent [-]

Again, people were using words for thousands of years before there were any dictionaries/linguists/academics.

The top-down theory of word definitions is just wrong. People are perfectly capable of using words without any formalities.

marliechiller 2 days ago | parent [-]

I'd argue the presence of dictionaries proves the exact opposite. People realised they were talking past one another due to inexact definitions, came to an agreement on those definitions, wrote them down, and built a process for maintaining them.

In any case, even if there isn't a _single_ definition of a given subject, both sides need to agree on some shared understanding to even begin to debate in good faith in the first place. It's precisely this lack of definition that causes a breakdown in conversation in a myriad of areas. A recent, obvious (and morbid) example would be "genocide".

killerstorm 2 days ago | parent [-]

Alright, if you draw that conclusion from the existence of dictionaries, what do you get from this fact:

Wittgenstein, who is considered one of the most brilliant philosophers of the 20th century, does not provide definitions in _Philosophical Investigations_ (widely regarded as the most important book of 20th-century philosophy), but instead goes through a series of examples, remarks, etc. In the preface he notes that this structure is deliberate and that he could not have written it differently. The topics of the book include the philosophy of language ("the concepts of meaning, of understanding, of a proposition, of logic, the foundations of mathematics, states of consciousness, ...").

His earlier book _Tractatus Logico-Philosophicus_ was very definition-heavy. And, obviously, Wittgenstein was well aware of things like dictionaries, and of essentially all philosophical works up to that point. He's not a guy who was just slacking.

Another thing to note is that attempts to build AI from definitions of words failed, and not for lack of trying (e.g. the Cyc project has been running since the 1980s: https://en.wikipedia.org/wiki/Cyc). OTOH, LLMs, which derive word meaning from usage rather than from definitions, seem to work quite well.
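A toy illustration of that usage-based idea (my own sketch, not how LLMs actually work internally): represent each word purely by the words that appear around it, with no definitions anywhere, and similarity of usage falls out of the data.

    from collections import Counter
    from math import sqrt

    corpus = [
        "the cat chased the mouse",
        "the dog chased the cat",
        "the cat ate the fish",
        "the dog ate the bone",
    ]

    # A word's "meaning" is the multiset of words seen near it (distributional idea).
    def context_vectors(sentences, window=2):
        vecs = {}
        for s in sentences:
            words = s.split()
            for i, w in enumerate(words):
                ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
                vecs.setdefault(w, Counter()).update(ctx)
        return vecs

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in set(a) | set(b))
        norm = lambda v: sqrt(sum(x * x for x in v.values()))
        return dot / (norm(a) * norm(b))

    vecs = context_vectors(corpus)
    print(cosine(vecs["cat"], vecs["dog"]))   # ~0.99: used in similar contexts
    print(cosine(vecs["cat"], vecs["bone"]))  # ~0.77: less similar usage

"Cat" and "dog" come out similar purely because they are used the same way; nobody ever defined either word.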