| ▲ | _heimdall 3 days ago |
| This is the problem with LLM researchers all but giving up on the problem of inspecting how the LLM actually works internally. As long as the LLM is a black box, it's entirely possible that (a) the LLM does reason through the rules and understands what moves are legal or (b) was trained on a large set of legal moves and therefore only learned to make legal moves. You can claim either case is the real truth, but we have absolutely no way to know because we have absolutely no way to actually understand what the LLM was "thinking". |
|
| ▲ | codeulike 3 days ago | parent | next [-] |
| Here's an article where they teach an LLM Othello and then probe its internal state to assess whether it is 'modelling' the Othello board internally: https://thegradient.pub/othello/ Associated paper: https://arxiv.org/abs/2210.13382 |
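For anyone curious what "probing" means concretely, here is a minimal sketch of the idea, not the paper's actual code: the hidden size, batch shapes, and three-way square labels are assumptions. The gist is to capture the model's hidden state after each move and train a small read-out to predict the contents of every board square from it.

```python
# Minimal sketch of the probing idea from the Othello paper, not their actual code.
# Assumed: `hidden_states` are activations captured from the game-playing model after
# each move, and `board_labels` give each square's contents (0=empty, 1=black, 2=white).
import torch
import torch.nn as nn

d_model = 512        # assumed hidden size of the model being probed
n_squares = 64       # Othello board
n_states = 3         # empty / black / white

# One linear read-out per square: hidden state -> contents of every square.
probe = nn.Linear(d_model, n_squares * n_states)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(hidden_states, board_labels):
    # hidden_states: (batch, d_model) floats; board_labels: (batch, n_squares) ints in {0,1,2}
    logits = probe(hidden_states).view(-1, n_squares, n_states)
    loss = loss_fn(logits.reshape(-1, n_states), board_labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy tensors stand in for real activations and board states.
h = torch.randn(32, d_model)
b = torch.randint(0, n_states, (32, n_squares))
print(train_step(h, b))
```

If a probe like this predicts held-out board states well above chance, the activations evidently encode the board; the paper goes further and intervenes on those activations to change the model's moves.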
| |
|
| ▲ | mattmcknight 3 days ago | parent | prev | next [-] |
| It's weird because it is not a black box at the lowest level: we can see exactly what all of the weights are doing. It's just too complex for us to understand. What is difficult is finding some intermediate pattern in between that we can label with an abstraction compatible with human understanding. It may not exist. For example, it may be more like how our brain works to produce language than like a logical rule-based system. We occasionally say the wrong word, skip a word, spell things wrong... violate the rules of grammar. The inputs and outputs of the model are human language, so at least there we know the system as a black box can be characterized, if not understood. |
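To make the "not a black box at the lowest level" point concrete, here is a rough sketch using GPT-2 small as a stand-in open-weights model (any model with published weights would do): every parameter is an ordinary tensor you can enumerate and print, which is exactly why the opacity is about scale and missing abstractions rather than hidden data.

```python
# Rough sketch of "every weight is visible": GPT-2 small as a stand-in open-weights
# model. Each parameter is an ordinary tensor with a name and a shape, nothing is
# hidden - the problem is that none of these numbers comes with a human-level label.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

total = 0
for name, param in model.named_parameters():
    total += param.numel()
    print(name, tuple(param.shape))  # e.g. transformer.h.0.attn.c_attn.weight (768, 2304)

print(f"{total:,} fully inspectable parameters, none individually meaningful")
```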
| |
| ▲ | _heimdall 3 days ago | parent [-] | | > The inputs and outputs of the model are human language, so at least there we know the system as a black box can be characterized, if not understood. This is actually where the AI safety debates tend to fall apart. From where I sit we can't characterize the black box itself, we can only characterize the outputs. More specifically, we can decide what we think the quality of the output is for a given input, and we can attempt to infer what might have happened in between. We really have no idea what happened in between, and though many of the "doomers" raise concerns that seem far-fetched, we have absolutely no way of knowing whether they are completely off base or raising concerns about a system that just hasn't shown problems in its input/output pairs yet. |
|
|
| ▲ | lukeschlather 3 days ago | parent | prev | next [-] |
| > (a) the LLM does reason through the rules and understands what moves are legal or (b) was trained on a large set of legal moves and therefore only learned to make legal moves. How can you learn to make legal moves without understanding what moves are legal? |
| |
| ▲ | _heimdall 3 days ago | parent | next [-] | | I'm spitballing here, so definitely take this with a grain of salt. If I only see legal moves, I may not think outside the box and come up with moves other than what I already saw. Humans run into this all the time: we see things done a certain way, effectively learn that that's just how to do it, and we don't innovate. Said differently, if the generative AI isn't actually being generative at all, meaning it's just predicting based on the training set, it could be producing only legal moves without ever learning or understanding the rules of the game. | | |
| ▲ | adelineJoOs 3 days ago | parent [-] | | I am not an ML person, and I know there is a mathematical explanation for what I am about to write, but here comes my informal reasoning. I fear this is not the case:
1) Either the LLM (or other forms of deep neural networks) can reproduce exactly what it saw, but nothing new (then it would only produce legal moves, if it was trained on only legal ones).
2) Or the LLM can produce moves that it did not exactly see, by outputting the "most probable"-looking move in that situation (which it has never seen before). In effect, this is combining different situations and their outputs into a new output. As a result of this "mixing", it might output an illegal move (= the output move is illegal in this new situation), despite having been trained on only legal moves. In fact, I am not even sure if the deep neural networks we use in practice can even replicate their training data exactly - it seems to me that there is some kind of compression going on by embedding knowledge into the network, which will come with a loss. I am deeply convinced that LLMs will never be an exact technology (but LLMs + other technology like proof assistants or compilers might be) | | |
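A toy way to see point 2) in action, using a bigram Markov chain over a made-up 3x3 board game as a deliberately crude stand-in for a sequence model (every game, square name, and number here is invented for illustration): it only ever learns transitions that appeared in legal games, yet stitching transitions from different games together can revisit an already-taken square.

```python
# Toy illustration of point 2): a bigram Markov chain trained only on legal games of
# a made-up 3x3 board game can still generate illegal games, because it mixes
# transitions from different training games.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical training data: each game is a legal sequence of distinct squares.
games = [
    ["a1", "b2", "c3", "a2", "b1"],
    ["b2", "a1", "a3", "c1", "c2"],
]

# Count bigram transitions: move -> possible next moves seen in training.
transitions = defaultdict(list)
for game in games:
    for cur, nxt in zip(game, game[1:]):
        transitions[cur].append(nxt)

illegal = 0
trials = 1000
for _ in range(trials):
    move = "a1"
    seen = {move}
    for _ in range(4):
        options = transitions.get(move)
        if not options:
            break
        move = random.choice(options)  # every choice is a transition seen in training
        if move in seen:               # ...but the square may already be taken
            illegal += 1
            break
        seen.add(move)

print(f"{illegal}/{trials} generated games revisit a taken square despite legal-only training data")
```

Nothing about the chain "understands" occupancy; the illegal games fall straight out of recombining perfectly legal fragments.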
| ▲ | _heimdall 3 days ago | parent [-] | | Oh I don't think there is any expectation for LLMs to reproduce any training data exactly. By design an LLM is a lossy compression algorithm, so data can't be expected to be an exact reproduction. The question I have is whether the LLM might be producing mostly legal moves only because it was trained on a set of data that itself only included legal moves. The training data would have only helped it predict legal moves, and any illegal moves it predicts may very well be because LLMs are designed with random sampling as part of the prediction loop. |
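As a rough sketch of that last point (the logits and the "illegal" label below are invented, not taken from any real model): even when a model concentrates almost all of its probability mass on legal moves, temperature sampling still occasionally picks a low-probability one.

```python
# Rough sketch of the sampling point; the logits and the "illegal" label are invented,
# not taken from a real model. Even when a model puts almost all of its probability
# mass on legal moves, temperature sampling occasionally picks a low-probability one.
import numpy as np

rng = np.random.default_rng(0)

moves = ["move A (legal)", "move B (legal)", "move C (legal)", "move D (illegal)"]
logits = np.array([4.0, 3.5, 3.2, -1.0])  # assumed raw model outputs

def sample(logits, temperature):
    z = logits / temperature
    p = np.exp(z - z.max())  # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(moves), p=p)

for t in (0.7, 1.0, 1.5):
    picks = [moves[sample(logits, t)] for _ in range(10_000)]
    rate = sum("illegal" in m for m in picks) / len(picks)
    print(f"temperature={t}: illegal-move rate ~ {rate:.2%}")
```

Higher temperatures flatten the distribution, so the illegal-move rate climbs even though the model's "preferences" never changed.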
|
| |
| ▲ | ramraj07 3 days ago | parent | prev [-] | | I think they’ll acknowledge these models are truly intelligent only when the LLMs also irrationally go in circles around logic to insist LLMs are statistical parrots. | | |
| ▲ | _heimdall 3 days ago | parent [-] | | Acknowledging an LLM is intelligent requires general agreement on what intelligence is and how to measure it. I'd also argue that it requires a way of understanding how an LLM comes to its answer rather than just inputs and outputs. To me that doesn't seem unreasonable and has nothing to do with irrationally going in circles; curious if you disagree, though. | | |
| ▲ | Retric 3 days ago | parent [-] | | Humans judge if other humans are intelligent without going into philosophical circles. How well they learn completely novel tasks (fail in conversation, pass with training).
How well they do complex tasks (debated; just look at this thread).
How generally knowledgeable they are (pass).
How often they do nonsensical things (fail). So IMO it really comes down to whether you’re judging by peak performance or minimum standards. If I had an employee that performed as well as an LLM I’d call them an idiot, because they would need constant supervision for even trivial tasks, but that’s not the standard everyone is using. | | |
| ▲ | _heimdall 3 days ago | parent [-] | | > Humans judge if other humans are intelligent without going into philosophical circles That's totally fair. I expect that to continue to work well when kept in the context of something/someone else that is roughly as intelligent as you are. Bonus points for the fact that one human understands what it means to be human and we all have roughly similar experiences of reality. I'm not so sure that kind of judging intelligence by feel works when you are judging something that is (a) totally different from you or (b) massively more (or less) intelligent than you are. For example, I could see something much smarter than me as acting irrationally when in reality it may be working with a much larger or more complex set of facts and context that doesn't make sense to me. |
|
|
|
|
|
| ▲ | raincole 3 days ago | parent | prev [-] |
| > we have absolutely no way to know To me, this means that it absolutely doesn't matter whether the LLM does reason or not. |
| |
| ▲ | _heimdall 3 days ago | parent [-] | | It might if AI/LLM safety is a concern. We can't begin to really judge safety without understanding how they work internally. |
|