stavros 4 days ago

I want to point out here that people do the same: a lot of the time we don't know why we thought or did something, but we'll confabulate plausible-sounding rhetoric after the fact.

LoganDark 3 days ago | parent | next [-]

The split-brain experiment is one of my favorites! https://www.youtube.com/watch?v=wfYbgdo8e-8

btbuildem 3 days ago | parent [-]

https://en.wikipedia.org/wiki/Peace_on_Earth_(novel)

sinuhe69 3 days ago | parent | prev | next [-]

Not in math.

TeMPOraL 3 days ago | parent | next [-]

Yes in math. Formalisms come after casual thoughts, at every step.

mdp2021 3 days ago | parent | next [-]

It's totally different: those formalisms live on a workbench, following a set of rules that either work or do not.

So, yes, that (math) is representative of the actual process: pattern recognition gives you spontaneous ideas, which you then assess for truthfulness in conscious acts of verification.

sinuhe69 3 days ago | parent | prev | next [-]

What is a casual thought that you cannot explain in math?

TeMPOraL 3 days ago | parent [-]

That question makes no sense. You can explain anything in math, because math is a language and lets you define whatever terms and axioms you need at a given moment.

(Whether or not such explanation is useful for anything is another issue entirely.)
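
(A toy illustration of "define whatever terms and axioms you need", sketched in Lean; every name here is invented for the example, and nothing is claimed about whether such axioms are useful or even mutually consistent:)

    -- Hypothetical primitives, invented on the spot: a formal language
    -- accepts whatever terms and axioms you choose to posit.
    axiom Thought : Type                     -- a term for "a thought"
    axiom Intuitive : Thought → Prop         -- a predicate over thoughts
    axiom some_thought_is_intuitive : ∃ t : Thought, Intuitive t
    -- Whether this "explains" anything is, as noted above, another issue entirely.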

worldsayshi 3 days ago | parent [-]

Can you explain how intuition led you to try a certain approach?

TeMPOraL 3 days ago | parent [-]

Is it enough if I hand-wave it with probability distributions, or do you want me to write out adjacency search in a high-dimensional space?
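
(If a concrete sketch helps: a crude, purely illustrative version of that "adjacency search" hand-wave, in Python with NumPy. The names, dimensions, and similarity measure are all made up for the example and say nothing about how intuition actually works:)

    import numpy as np

    rng = np.random.default_rng(0)
    memory = rng.normal(size=(1000, 256))   # 1000 stored "ideas" as 256-d vectors
    query = rng.normal(size=256)            # the current problem, embedded the same way

    def nearest(memory: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
        """Indices of the k stored vectors most similar (by cosine) to the query."""
        sims = memory @ query / (np.linalg.norm(memory, axis=1) * np.linalg.norm(query))
        return np.argsort(-sims)[:k]

    print(nearest(memory, query))           # the "intuitively adjacent" candidates to try first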

legel 3 days ago | parent | prev [-]

Math comes from brains.

HeavyStorm a day ago | parent | prev [-]

That's some misunderstanding of the human brain and thought process...

mdp2021 3 days ago | parent | prev [-]

/Some/ people bullshit themselves stating the plausible; others check their hypotheses.

The difference is total in both humans and automated processes.

catskul2 3 days ago | parent | next [-]

Everyone, every last one of us, does this every single day, all day; only occasionally do we deviate to check ourselves, and even then it's often to save face.

Daniel Kahneman was awarded a Nobel Prize for related research.

If you think it doesn't apply to you, you're definitely wrong.

mdp2021 3 days ago | parent | next [-]

> occasionally

Properly educated people do it regularly, not occasionally. You are describing a particular set of people; no, it does not cover everyone.

Some people will output a pre-given answer; some people check.

Edit: to the downvote sniper... find an argument, at least.

og_kalu 3 days ago | parent [-]

Your decisions shape your preferences just as much as your preferences shape your decisions, and you're not even aware of it. Yes, everybody regularly confabulates plausible-sounding things that they themselves genuinely believe to be the 'real reason'. You're not immune or special.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3196841/

mdp2021 2 days ago | parent [-]

I will read the article more carefully as soon as I have the time, but: putting aside the question of how such an investigation would prove that all people function in the same way,

that does not seem to counter the point that some people do «check their hypotheses», as is due. Some people do exercise critical thinking. It is an intentional process.

og_kalu 2 days ago | parent [-]

You're not getting it.

You ask A "Why did you choose that?" > He answers "I like the color blue"

This makes sense. This is what everyone thinks and believes is the actual sequence of such events.

But often, the actual sequence is "Let's go with this" > "Now I like the color blue"

'A' didn't lie to you or try to trick you. He didn't consciously rationalize liking blue after the fact. He's not stupid or "prone to bad thinking". Altering your perception of events without your conscious awareness is simply something your brain does fairly regularly.

Make no mistake: A genuinely likes blue now. The only difference is that he genuinely believes he made the choice because he liked blue, when in fact his brain, which tends to make you favor your choices, gave him the liking for blue after the fact so that the choice sits better.

This is not something you "check your hypotheses" out of. And it's something every human deals with every day, including you.

mdp2021 2 days ago | parent [-]

I get what you are pointing at: you are focusing rather strictly on the post from Stavros, which states that "people pseudo-rationalize their not-at-the-time-rational behaviour with plausible explanatory theories".

But I was instead focusing on the general problem in the root post from Foundry27, and on a loose interpretation of the post from Stavros: the opposition between the faculty of generating convincing fantasies and the faculty of critical thinking. (That focus is there because it is more general and pressing in current AI than the contextual problem of "explanation", which is something of a "perversion" compared to the same issue in classical AI, where the steps are recorded procedurally thanks to transparency, instead of the paradox of asking an obscure, unreliable engine "what it did".)

What I meant is that the general scheme of bullshitting oneself and pseudo-rationalizing it is not the only way. Please see the other sub-branch, in which we talked about mathematics. In important cases, the fantasies are then consciously checked as thoroughly as constraints allow.

So I stated «/Some/ people bullshit themselves stating the plausible; others check their hypotheses ... Some people will output a pre-given answer; some people check» - as a crucial discriminator in both the natural and the artificial. Please note that the trend of the past two years has generated, in some, the belief that this at-most-preliminary part is all there is.

Also note that catskul2 wrote «only occasionally do we deviate to check ourselves» - so the reply is "No: the more one is educated and intellectually trained, the more one's thoughts are vetted - the thought process is disciplined to check its objects".

But re-checking the branch, I see that the post from Stavros was quite specifically about the "smaller" area of "pseudo-rationalizing", so I see why my posts may have looked like an odd fit.

mdp2021 3 days ago | parent | prev [-]

By the way: I have seldom come across a post so weak.

> every last one of us

And how do you prove that?

> A Nobel prize was given

So?

> If you think, you

Prove it.

Support it, at least. That is not discussion.

stavros 3 days ago | parent | prev [-]

How are you going to check your hypotheses for why you preferred that jacket to that other jacket?

mdp2021 3 days ago | parent | next [-]

Do not lose the original point: some systems have the goal of sounding plausible, while others have the goal of telling the truth. Some systems, when asked "where have you been", will reply "at the baker's" because it makes a nice narrative in their "novel-writing, re-writing of reality"; others will check memory and say "at the butcher's", where they have actually been.

When people invent explicit reasons for why they turned left or right, those reasons remain hypotheses. The clumsy will promote those hypotheses to beliefs. The apt will keep the spontaneous ideas as hypotheses until the ability to assess them comes.

og_kalu 3 days ago | parent [-]

Everybody promotes these sorts of hypotheses to beliefs because it's not a conscious decision you are aware of. It's not about being clumsy or apt. You don't have much control over it.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3196841/

https://pure.uva.nl/ws/files/25987577/Split_Brain.pdf

mdp2021 2 days ago | parent [-]

It does not matter that there may be a tendency towards bad thinking: what matters is the possibility of proper thinking and the training towards it (becoming more and more proficient at it and practicing it constantly, having it as your natural state; in automation, implementing it in the process).

What you control is the intentional revision of thought.

(I am acquainted with earlier studies about the corpus callosum, but I do not know why you mention that or what it would prove: could you be clearer? I do not see how it affects the notion of critical thinking.)

og_kalu 2 days ago | parent [-]

I've explained it the best I can in the other comment. But you keep making the mistake of treating this as just a matter of 'bad thinking' or 'intentional revision of thought', and while I'm not saying those things don't exist, it isn't that.

Not only are the rationalizations I'm talking about, and which some of these papers allude to, not intentional; they often happen without your conscious awareness.

mdp2021 2 days ago | parent [-]

On my having shown up with percussion at the strings meeting (i.e. arguing somewhat beside the point), see the other reply.

I want to check the papers you proposed as soon as I have the time: I find it difficult to believe that the conscious mind cannot intercept those "changes of mind" and correct them.

But please note: you are writing «Not only are ... not intentional»... Immature thought need not be intentional at all: it is largely spontaneous thought. But whether it is part of an intentional process ("let us ponder towards some goal") or part of the subterranean functions, when it becomes visible (or is «intercepted», as I wrote above), the trained mind looks at it with distrust and asks questions about its foundations - intentionally, in the conscious mind, as a learnt process.

DSingularity 3 days ago | parent | prev [-]

Is that example representative of the LLM tasks for which we seek explainability?

stavros 3 days ago | parent [-]

Are we holding LLMs to a higher standard than people?

f_devd 3 days ago | parent [-]

Ideally, yes: LLMs are tools that we expect to work; people are inherently fallible and (even unintentionally) deceptive. LLMs being human-like in this specific way is not desirable.

stavros 3 days ago | parent [-]

Then I think you'll be very disappointed. LLMs aren't in the same category as calculators, for example.

f_devd 3 days ago | parent [-]

I have no illusions about LLMs; I have been working with them since og BERT, always with these same issues and more. I'm just stating what would be needed in the future to make them reliably useful outside of creative writing and (human-guided and checked) search.

If an LLM produces incorrect or orthogonal rhetoric, with no way to reliably fix or debug it, it's just not as useful as it theoretically could be given the data contained in its parameters.