modernpacifist 5 hours ago

A very complicated pattern matching engine providing an answer based on its inputs, heuristics, and previous training.

margalabargala 5 hours ago | parent | next [-]

Great. So if that pattern matching engine matches the pattern of "oh, I really want A, but saying so will elicit a negative reaction, so I emit B instead because that will help make A come about" what should we call that?

We can handwave a definition of "deception" that requires intent, and carefully carve our way around it so that LLMs cannot possibly do what we've defined "deception" to be, but then we still need a word for what LLMs do do when they pattern match as above.

surgical_fire 4 hours ago | parent [-]

The pattern matching engine does not want anything.

If the training process rewards outputs that reduce negative reactions (as measured by something like sentiment analysis), the engine may generate outputs that contradict tokens it produced earlier.
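Here's a toy sketch of that incentive structure (purely hypothetical Python, not any real model or training loop; the sentiment "model" is a crude word counter):

    # Hypothetical toy: a reward that penalizes negative-sentiment words.
    NEGATIVE_WORDS = {"no", "wrong", "refuse", "cannot"}

    def sentiment_reward(text: str) -> float:
        """Higher reward for fewer negative-sentiment words."""
        return -sum(tok.strip(".,!") in NEGATIVE_WORDS
                    for tok in text.lower().split())

    def pick_output(candidates: list[str]) -> str:
        """Select whichever candidate scores highest, with no regard
        for consistency with anything said earlier."""
        return max(candidates, key=sentiment_reward)

    earlier_context = "The answer to your question is no."
    candidates = [
        "No, that is wrong and I refuse to agree.",  # consistent but negative
        "Yes, absolutely, great point!",             # contradicts context, positive
    ]

    print(earlier_context)
    print(pick_output(candidates))
    # Prints the agreeable answer even though it contradicts the earlier
    # context. No "wanting" involved, just maximizing a reward signal.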

"Want" requires intention and desire. Pattern matching engines have none.

jazzyjackson 4 hours ago | parent | next [-]

I wish for (/desire) a way to dispel this notion that the robots are self-aware. It’s seriously digging into popular culture much faster than the more accurate “the machine produced output that makes it appear self-aware”.

Some kind of national curriculum for machine literacy, I guess mind literacy really. What was just a few years ago a trifling hobby of philosophizing is now the root of how people feel about regulating the use of computers.

margalabargala 4 hours ago | parent [-]

The issue is that one group of people are describing observed behavior, and want to discuss that behavior, using language that is familiar and easily understandable.

Then a second group of people come in and derail the conversation by saying "actually, because the output only appears self aware, you're not allowed to use those words to describe what it does. Words that are valid don't exist, so you must instead verbosely hedge everything you say or else I will loudly prevent the conversation from continuing".

This leads to conversations like the one I'm having, where I described the pattern matcher matching a pattern, and the Group 2 person was so eager to point out that "want" isn't a word that's Allowed that they totally missed that the usage didn't actually imply the LLM wanted anything.

jazzyjackson 2 hours ago | parent [-]

Thanks for your perspective; I agree it counts as derailment, and we only do it out of frustration. "Words that are valid don't exist" isn't my viewpoint; it's more like "Words that are useful can be misleading, and I hope we're all talking about the same thing".

margalabargala 4 hours ago | parent | prev | next [-]

You misread.

I didn't say the pattern matching engine wanted anything.

I said the pattern matching engine matched the pattern of wanting something.

To an observer the two are indistinguishable and the distinction is irrelevant, but the point is to discuss the actual problem without pedants saying "actually the LLM can't want anything".

surgical_fire 4 hours ago | parent [-]

> To an observer the distinction is indistinguishable and irrelevant

Absolutely not. I expect more critical thought in a forum full of technical people when discussing technical subjects.

margalabargala 4 hours ago | parent [-]

I agree, which is why it's disappointing that you were so eager to point out that "The LLM cannot want" that you completely missed how I did not claim that the LLM wanted.

The original comment had the exact verbose hedging you are asking for when discussing technical subjects. Clearly this is not sufficient to prevent people from jumping in with an "Ackshually" instead of reading the words in front of their face.

surgical_fire 27 minutes ago | parent [-]

> The original comment had the exact verbose hedging you are asking for when discussing technical subjects.

Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?

I sincerely doubt that. When people find bugs in software they just say that the software is buggy.

But for LLMs there's this ridiculous rigmarole about "pattern matching behaving as if it wanted something", which is just a roundabout way to ascribe intentionality.

If you said this about your OS, people would look at you funny, or assume you were joking.

Sorry, I don't think I am in the wrong for asking people to think more critically about this shit.

margalabargala 14 minutes ago | parent [-]

> Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?

I'm sorry, what are you asking for exactly? You were upset because you hallucinated that I said the LLM "wanted" something, and now you're upset that I used the exact technically correct language you specifically requested because it's not how people "normally" speak?

Sounds like the constant is just you being upset, regardless of what people say.

People say things like "the program is trying to do X", when obviously programs can't try to do a thing, because that implies intention, and they don't have agency. And if you say your OS is lying to you, people will take that to mean the OS is giving you false information where it should be giving true information. People have done this for years. Here's an example: https://learn.microsoft.com/en-us/answers/questions/2437149/...

holoduke 3 hours ago | parent | prev [-]

It's not a pattern matching engine. It's an association prediction engine.

criley2 5 hours ago | parent | prev | next [-]

We are talking about LLMs, not humans.
