Dylan16807 6 days ago

I'll try to keep this simple.

> I'm not disagreeing with you. You understand that, right?

We disagree about whether context can make a difference, right?

> The parent was talking about stringing together inferences. My argument was how you string them together matters. That's all. I said "context matters."

> TLDR: We can't determine if likelihood increases or decreases without additional context

The situations you describe where inference acts differently do not fall under the "stringing together"/"chaining" they were originally talking about. Context never makes their original statement untrue. Chaining always makes evidence weaker.

To be extra clear: it's not about whether the evidence pushes your result number up or down; it's that the likelihood of the evidence itself being correct drops.
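To put hypothetical numbers on that, here's a minimal Python sketch; the flat 0.9 per-step confidence is invented for illustration:

  # Assume each inference in the chain is correct with probability 0.9,
  # independently of the others (numbers invented for illustration).
  p_step = 0.9
  for n in range(1, 11):
      p_chain = p_step ** n        # all n links must hold at once
      print(f"{n:2d} steps: {p_chain:.3f}")
  # 1 step: 0.900 ... 7 steps: 0.478 -- past seven steps the chain is
  # more likely broken than intact, whichever way each step points.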

> It is the act of chaining together functions.

They were not talking about whether something is composition or not. When they said "string" and "chain" they were talking about a sequence of inferences where each one leads to the next one.

Composition can be used in a wide variety of contexts. You need context to know if composition weakens or strengthens arguments. But you do not need context to know if stringing/chaining weakens or strengthens.

> No, you're being too strict in your definition of "chain".

No, you're being way too loose.

> This tells me you drew your chain wrong. If multiple things are each contributing to D independently then that is not A->B->C->D

??? Of course those are different. That's why I wrote "as opposed to".

> I also gave an example for the other case. So why focus on one of these and ignore the other?

I'm focused on the one you called a "counter example" because I'm arguing it's not an example.

If you specifically want me to address "If these are being multiplied, then yes, this is going to decrease as xy < x and xy < y for every x, y < 1." then yes, that's correct. I never doubted your math, and everyone agrees about that one.
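To make both cases concrete, here's a throwaway Python sketch (all numbers invented): multiplying along a serial chain can only shrink the result, while independent pieces of evidence for the same conclusion can push its probability up.

  # (a) Serial chain A -> B -> C -> D: confidences multiply, so the
  # product falls below every individual link (xy < x for 0 < x, y < 1).
  links = [0.9, 0.8, 0.9]
  p_chain = 1.0
  for p in links:
      p_chain *= p                 # each extra link shrinks the product
  print(p_chain)                   # 0.648

  # (b) Independent evidence each pointing at D: odds-form Bayes update.
  odds = 0.5 / 0.5                 # prior P(D) = 0.5
  for lr in [3.0, 3.0]:            # two pieces, likelihood ratio 3 each
      odds *= lr                   # each extra piece raises the odds
  print(odds / (1 + odds))         # 0.9 -- the probability went up

The first case is what I'm calling a chain; the second is the kind of structure where adding evidence can help.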

TL;DR:

At this point I'm mostly sure we're only disagreeing about the definition of stringing/chaining? If so: oops, sorry, I didn't mean to argue so much about definitions. If not, can you give me an example of something I would call a chain where adding a step increases the probability that the evidence is correct?

And I have no idea why you're talking about LLMs.

godelski 6 days ago

  > I'm mostly sure we're only disagreeing about the definition of stringing/chaining? 
Correct.

  > No, you're being way too loose.
Okay, instead of just making claims and expecting me to trust you, point to something concrete. I've even tried to google it, but despite my years of study in statistics, measure theory, and even mathematical logic, I'm at a loss to find your definition.

I'm aware of the Chain Rule of Probability, but that isn't the only place you'll find the term "chain" in statistics. Hell, the calculus Chain Rule is still used there too! So forgive me for being flustered, but you are literally arguing that a Markov Chain isn't a chain. Maybe I'm having a stroke, but I'm pretty sure the word "chain" is in "Markov Chain".
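To be concrete about that, here's a throwaway Python sketch (the transition probabilities are made up) of a Markov chain where taking more steps raises the probability of a state:

  # Two-state Markov chain where adding steps *raises* P(state 1).
  P = [[0.5, 0.5],   # from state 0: move to state 1 half the time
       [0.1, 0.9]]   # from state 1: sticky, stays put 90% of the time
  dist = [1.0, 0.0]  # start certain we're in state 0
  for step in range(1, 6):
      dist = [dist[0] * P[0][0] + dist[1] * P[1][0],
              dist[0] * P[0][1] + dist[1] * P[1][1]]
      print(step, round(dist[1], 3))
  # Prints 0.5, 0.7, 0.78, 0.812, 0.825: P(state 1) climbs toward its
  # stationary value of 5/6, even though every step is "chained".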

Dylan16807 6 days ago

> Okay, instead of just making claims and expecting me to trust you, point to something concrete. I've even tried to google it, but despite my years of study in statistics, measure theory, and even mathematical logic, I'm at a loss to find your definition.

Let's look again at what we're talking about:

>>> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.

>> As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error.

> I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.

> Edit: Couldn't find the article, but AI referenced Bayesian "Chain of reasoning fallacy".

The only term in there you could google is "tolerance stackup". The rest is people making ad-hoc descriptions of things, except for "Chain of reasoning fallacy", which is a fake term. So I'm not surprised you didn't find anything on Google, and I can't point you to anything on Google either. There is nothing "concrete" to ask for when it comes to some guy's ad-hoc description; you just have to read it and do your best.

And everything I said was referring back to those posts, primarily the last one by robocat. I was not introducing anything new when I used the terms "string" and "chain". I was not referring to any scientific definitions. I was only talking about the concept described by those three posts.

Looking back at those posts, I will confidently state that the concept they were talking about does not include Markov chains. You're not having a stroke; it's just a coincidence that the word "chain" can be used to mean multiple things.

godelski 6 days ago

I googled YOUR terms. And if you had read my messages you'd have noticed that I'm not a novice when it comes to math. Hell, you should have gotten that from my very first comment. I was never questioning whether I'd had a stroke; I was questioning your literacy.

  > I was not referring to any scientific definitions.
Yet, you confidently argued against ones that were stated.

If you're going to talk out of your ass, at least have the decency to let everyone know first.

Dylan16807 6 days ago

They were never my terms. They were the terms used by the people who were having a nice conversation before you interrupted.

You told them they were wrong, that it could go either way.

That's not true.

What they were talking about cannot go either way.

You were never talking about the same thing as them. I gave you the benefit of the doubt by thinking you were trying to talk about the same thing as them. Apparently I shouldn't have.

You can't win this on definitions. They were talking about a thing without using formal definitions, and you replied to them with your own unrelated talk, as if it were what they meant. No. You don't get to change what they meant.

That's why I argued against your definition. Your definition would be lovely in some other conversation, but it is not what they meant, and it cannot override what they meant.