▲ lukev 5 hours ago
This works for people. An LLM is stateless. Even if you believe that consciousness could somehow emerge during a forward pass, it would be a brief flicker lasting no longer than it takes to emit a single token.
|
▲ hackrmn 5 hours ago
> An LLM is stateless

Unless you mean something entirely different by that than what most people, specifically on Hacker News of all places, understand by "stateless", most of us, myself included, would disagree with you about the "stateless" property. If you do mean something other than implying that an LLM doesn't transition from state to state, potentially confined to a limited set of states by a finite, immutable training data set, the accessible context, and the lack of a PRNG, then would you care to elaborate? Also, something can be stateful _and_ without consciousness, like a finite automaton. I don't think anyone's claiming (yet) that any of today's models have consciousness, but that's mostly because it's going to be practically impossible to prove without some accepted theory of consciousness, I guess.
| |
▲ lukev 5 hours ago

So obviously there is a lot of data in the parameters. But by stateless, I mean that a forward pass is a pure function over the context window. The only information shared between forward passes is the context itself as it is built up. I certainly can't define consciousness, but it feels like some sort of existence or continuity over time would have to be a prerequisite.
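(A minimal sketch of what "pure function over the context window" means, assuming a standard autoregressive sampling loop; forward and sample below are hypothetical callables, not any specific model API:)

    from typing import Callable, List

    def generate(
        forward: Callable[[List[int]], List[float]],   # pure: tokens in -> logits out, no hidden state
        sample: Callable[[List[float]], int],          # e.g. argmax for fully deterministic decoding
        prompt_tokens: List[int],
        max_new_tokens: int,
    ) -> List[int]:
        # The only thing carried between forward passes is this growing token list.
        context = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = forward(context)        # same weights + same context -> same logits
            context.append(sample(logits))   # the "state" is just the appended text
        return context

(With greedy sampling this loop is fully deterministic: rerun it with the same weights and prompt and you get the same text.)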
|
|
▲ andrewflnr 5 hours ago
An agent is notably not stateless.
| |
▲ lukev 5 hours ago

Yes, but the state is just the prompt and the text already emitted. You could assert that text can encode a state of consciousness, but that's an incredibly bold claim with a lot of implications.
▲ andrewflnr 2 hours ago

It's a bold claim for sure, and not one that I agree with, but not one that's facially false either. We're approaching a point where we will stop having easy answers for why computer systems can't have subjective experience.
▲ falcor84 4 hours ago

You're conflating state and consciousness. Clawbots in particular are agents that persist state across conversations in text files and optionally in other data stores.
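(A rough sketch of that kind of file-backed persistence; agent_memory.json and call_llm are made-up names for illustration, not any particular agent framework's API:)

    import json
    from pathlib import Path
    from typing import Callable

    MEMORY_FILE = Path("agent_memory.json")   # hypothetical: state lives on disk, not in the model

    def load_memory() -> dict:
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return {"notes": []}

    def run_turn(user_message: str, call_llm: Callable[[str], str]) -> str:
        memory = load_memory()
        # Persisted notes are simply prepended to the prompt as plain text.
        prompt = "Known facts: " + "; ".join(memory["notes"]) + "\nUser: " + user_message
        reply = call_llm(prompt)                      # the model call itself is still stateless
        memory["notes"].append(user_message)          # naive memory policy, just for illustration
        MEMORY_FILE.write_text(json.dumps(memory))    # written back so the next conversation sees it
        return reply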
▲ lukev 4 hours ago

I am not sure how to define consciousness, but I can't imagine a definition that doesn't involve state or continuity across time.
▲ falcor84 2 hours ago

It sounds like we're in agreement. Present-day AI agents clearly maintain state over time, but that on its own is insufficient for consciousness. On the other side of the coin, though, I would just add that I believe long-term persistent state is a soft, rather than hard, requirement for consciousness - people with anterograde amnesia are still conscious, right?
▲ esafak 2 hours ago

Current agents "live" in discretized time. They sporadically get inputs, process them, and update their state. The only thing they don't currently do is learn (update their models). What's your argument?
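(Roughly, a sketch of that loop; get_input and model are hypothetical stand-ins. The point is that the state changes every tick while the model weights never do:)

    from typing import Callable, List

    def agent_loop(
        get_input: Callable[[], str],    # blocks until the next event arrives (discretized time)
        model: Callable[[str], str],     # fixed weights: never updated inside the loop
        steps: int,
    ) -> List[str]:
        state: List[str] = []            # the agent's only mutable memory
        for _ in range(steps):
            observation = get_input()                            # one tick: an input arrives
            response = model("\n".join(state + [observation]))   # process using state + input
            state.append(observation + " -> " + response)        # state is updated; the model is not
        return state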
|
|
|
|
|
▲ OkayPhysicist 5 hours ago
While I'm definitely not in the "let's assign the concept of sentience to robots" camp, your argument is a bit disingenuous. Most modern LLM systems apply some sort of loop over previously generated text, so they do, in fact, have state.