vladsh 3 hours ago

LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data: statistical algorithms, not brains, and not systems with “psychology” in any human sense.
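
To make “predictive text model” concrete, here’s a toy sketch: a bigram lookup table standing in for the neural network. The corpus and names are purely illustrative.

    import random

    # Toy stand-in for a language model: a table of observed next-token
    # frequencies. A real LLM plays the same game at scale, with a neural
    # network estimating P(next token | context) instead of a lookup table.
    corpus = "the cat sat on the mat and the cat ate".split()
    model = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        model.setdefault(prev, []).append(nxt)

    def generate(token, n=6):
        out = [token]
        for _ in range(n):
            choices = model.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))  # sample the next token
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat and"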

Agents, however, are products. They should have clear UX boundaries: show what context they’re using, communicate uncertainty, validate outputs where possible, and expose performance so users can understand when and why they fail.
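
A minimal sketch of what that contract could look like, assuming a hypothetical response schema (the field names are illustrative, not any real agent API):

    from dataclasses import dataclass, field

    # Hypothetical envelope for an agent answer that surfaces the UX
    # boundaries above: the context used, an uncertainty estimate, and
    # whether the output passed an external check.
    @dataclass
    class AgentResponse:
        answer: str
        context_used: list[str] = field(default_factory=list)  # shown to the user
        confidence: float = 0.0   # communicated uncertainty, 0.0-1.0
        validated: bool = False   # did a product-controlled check pass?

    def respond(answer, context, confidence, check):
        # Validate where possible: a schema, a test suite, a linter, etc.
        return AgentResponse(answer, list(context), confidence, check(answer))

    r = respond("SELECT 1;", ["db schema v3"], 0.72, lambda a: a.endswith(";"))
    print(r.validated, r.confidence)  # a UI can warn on low-confidence output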

IMO the real issue is that raw, general-purpose models were released directly to consumers. That normalized under-specified consumer products and created the expectation that users would interpret model behavior, define their own success criteria, and manually handle edge cases, sometimes with severe real-world consequences.

I’m sure the market will correct itself with time, but I hope more people learn when not to use these half-baked AGI “products”.

DuperPower an hour ago

Because they wanted to sell the illusion of consciousness. ChatGPT, Gemini, and Claude are human simulators, which is lame. I want autocomplete prediction, not this personality and retention stuff, which only makes the agents dumber.

basch 2 hours ago

They are human in the sense that they are reinforced, by humans, to exhibit human-like behavior. A human byproduct.

adleyjulian 2 hours ago

> LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data: statistical algorithms, not brains, and not systems with “psychology” in any human sense.

Per the predictive processing theory of mind, human brains are similarly predictive machines. "Psychology" is an emergent property.

I think it's overly dismissive to point to the fundamentals being simple, i.e. that it's a token prediction algorithm, when it's clear to everyone that it's the unexpected emergent properties of LLMs that everyone is interested in.

xoac 2 hours ago

The fact that a theory exists does not mean that it is not garbage.

estearum an hour ago

So surely you can demonstrate how the brain is doing something much different from this, then go collect your Nobel?

imiric an hour ago

The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose. Our inability to explain and predict their behavior is due to the mind-boggling amount of data and processing complexity that no human can comprehend.

In contrast, we know very little about human brains. We know how they work at a fundamental level, and we have a vague understanding of brain regions and their functions, but little knowledge of how the complex behavior we observe actually arises. The complexity is also orders of magnitude greater than anything we can model with current technology, and it’s very much an open question whether our current deep learning architectures are even the right approach to modeling it.

So, sure, emergent behavior is neat and interesting, but just because we can’t intuitively understand a system doesn’t mean that we’re on the right track to modeling human intelligence. After all, we find the patterns of the Game of Life interesting, yet the rules of that system are very simple. LLMs are similar, only far more complex. We find the patterns they generate interesting, and potentially very useful, but anthropomorphizing this technology, or thinking that we have invented “intelligence”, is wishful thinking and hubris. Especially since we struggle to define that word to begin with.
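
For reference, the full rule set of the Game of Life fits in a few lines. A minimal sketch: a glider drifts across the grid even though no rule mentions motion.

    from collections import Counter

    # Conway's Game of Life: a live cell survives with 2-3 live neighbors,
    # a dead cell is born with exactly 3. Nothing else.
    def step(live):
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # same shape, shifted one cell diagonally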

adleyjulian an hour ago

At no point did I say that LLMs have human intelligence, or that they model human intelligence. I also didn’t say that they are the correct path toward it, though the truth is we don’t know.

The point is that one could be similarly dismissive of human brains, saying they’re prediction machines built on basic blocks of neurochemistry, and such a view would be asinine.
