chpatrick 2 days ago

Because I gave it a unique problem I had and it came up with an answer it definitely didn't see in the training data.

Specifically, I wanted to know how I could interface two electronic components, one of which is niche, recent, and handmade, and doesn't have any public documentation, so there's no way it could have known about it before.

stinos 2 days ago | parent [-]

one of which is niche, recent, handmade and doesn't have any public documentation

I still see two possibilities: either you asked it something similar enough that it came up with a fairly standard answer that just happened to be correct, or you gave it enough info yourself.

- For example, you created a new line of MCUs called FrobnicatorV2 and asked it 'how do I connect power supply X to the FrobnicatorV2', and it gave an answer like 'connect the red wire to VCC and the black wire to GND'. That's not exactly special.

- Or, you did describe that component in some way, and you did so using standard electronics lingo, i.e. essentially in terms of other existing components which it definitely did know about (unless you invented something completely new that doesn't use any currently known physics). In that case it's irrelevant that your particular new component wasn't known, because you gave away the answer by describing it. E.g. you asked it 'how do I connect power supply X to an MCU with power pins Y and Z'. Again, nothing special.

chpatrick 2 days ago | parent [-]

If a human uses their general knowledge of electronics to answer a specific question they haven't seen before, that's obviously thinking. I don't see why LLMs are held to a different standard. It's obviously not repeating an existing answer verbatim, because no such answer exists in my case.

You're saying it's nothing "special" but we're not discussing whether it's special, but whether it can be considered thinking.

stinos a day ago | parent [-]

it's obviously not repeating an existing answer verbatim

Not being verbatim, in the sense that the words are different, doesn't make it thinking. Also, when we say 'humans think', that means a lot more than just 'new question generates correct answer' or 'smart autocompletion'. See a lot of the other comments here for details.

But again: I laid out two possibilities explaining why the question might in fact not be new, nor the data, so I'm curious which of the two (or some other explanation) covers the situation you're talking about.

You're saying it's nothing "special" but we're not discussing whether it's special, but whether it can be considered thinking.

Apologies, by 'special' I did in fact mean 'thinking'.

chpatrick a day ago | parent [-]

Sufficiently smart autocomplete is indistinguishable from thinking, so I don't think that distinction means anything.