| ▲ | Certhas 6 days ago |
| I don't understand what point you're hinting at. Either way, I can get arbitrarily good approximations of arbitrary nonlinear differential/difference equations using only linear probabilistic evolution at the cost of a (much) larger state space. So if you can implement it in a brain or a computer, there is a sufficiently large probabilistic dynamic that can model it. More really is different. So I view all deductive ab-initio arguments about what LLMs can/can't do due to their architecture as fairly baseless. (Note that the "large" here is doing a lot of heavy lifting. You need _really_ large. See https://en.m.wikipedia.org/wiki/Transfer_operator) |
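To make the transfer-operator reference concrete, here is a minimal sketch (Ulam's method on a toy logistic map; the map, bin count and sample counts are arbitrary choices for illustration): the nonlinear map becomes an ordinary stochastic matrix once you track distributions over bins instead of individual points, and finer bins buy a better approximation at the cost of dimension.

    import numpy as np

    f = lambda x: 4 * x * (1 - x)           # a nonlinear map on [0, 1]
    n_bins = 200                            # resolution of the linear "lift"
    edges = np.linspace(0, 1, n_bins + 1)

    # Transition matrix P[i, j] ~ P(next bin = j | current bin = i), estimated by
    # pushing sample points from each bin through f (Ulam's method).
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        xs = np.linspace(edges[i], edges[i + 1], 50, endpoint=False)
        for j in np.digitize(f(xs), edges) - 1:
            P[i, min(j, n_bins - 1)] += 1
    P /= P.sum(axis=1, keepdims=True)

    # Nonlinear dynamics of points becomes linear dynamics of distributions:
    p = np.ones(n_bins) / n_bins
    for _ in range(100):
        p = p @ P                           # approximate pushforward of p under f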
|
| ▲ | measurablefunc 6 days ago | parent | next [-] |
| What part about backtracking is baseless? Typical Prolog interpreters can be implemented in a few MBs of binary code (the high level specification is even simpler & can be in a few hundred KB)¹ but none of the LLMs (open source or not) are capable of backtracking even though there is plenty of room for a basic Prolog interpreter. This seems like a very obvious shortcoming to me that no amount of smooth approximation can overcome. If you think there is a threshold at which point some large enough feedforward network develops the capability to backtrack then I'd like to see your argument for it. ¹https://en.wikipedia.org/wiki/Warren_Abstract_Machine |
| |
| ▲ | Certhas 6 days ago | parent | next [-] | | I know that if you go large enough you can do any finite computation using only fixed transition probabilities. This is a trivial observation. To repeat what I posted elsewhere in this thread: Take a finite-tape Turing machine with an alphabet of N symbols and tape length T, giving N^T possible tape states. Now consider that you have a probability for each state instead of a definite state. The transitions of the Turing machine induce transitions of the probabilities. These transitions define a Markov chain on an N^T-dimensional probability space. Is this useful? Absolutely not. It's just a trivial rewriting. But it shows that high-dimensional spaces are extremely powerful. You can trade off sophisticated transition rules for high dimensionality. You _can_ continue this line of thought in more productive directions, though. E.g. what if the input of your machine is genuinely uncertain? What if the transitions are not precise but slightly noisy? You'd expect that the fundamental capabilities of a noisy machine wouldn't be that much worse than those of a noiseless one (over finite time horizons). What if the machine was built to be noise resistant in some way? All of this should regularize the Markov chain above. If it's more regular, you can start thinking about approximating it using a lower-rank transition matrix. The point of this is not to say that this is really useful. It's to say that there is no reason in my mind to dismiss the purely mathematical rewriting as entirely meaningless in practice. | |
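A minimal sketch of that rewriting, with a made-up 2-state, 2-symbol machine on a length-4 tape (the machine, tape length and starting configuration are all invented for illustration): every configuration gets an index, and the deterministic step function becomes a 0/1 stochastic matrix over configurations.

    from itertools import product
    import numpy as np

    # delta[(state, symbol)] = (new_state, write, move); move is -1 or +1
    delta = {('A', 0): ('A', 1, +1), ('A', 1): ('B', 1, +1),
             ('B', 0): ('B', 1, -1), ('B', 1): ('A', 0, -1)}

    T = 4                                                 # tape length
    configs = [(s, h, tape) for s in 'AB' for h in range(T)
               for tape in product((0, 1), repeat=T)]
    index = {c: i for i, c in enumerate(configs)}

    # Deterministic steps become a 0/1 stochastic matrix over all configurations.
    P = np.zeros((len(configs), len(configs)))
    for s, h, tape in configs:
        ns, w, mv = delta[(s, tape[h])]
        ntape = tape[:h] + (w,) + tape[h + 1:]
        nh = min(max(h + mv, 0), T - 1)                   # clamp head at the tape ends
        P[index[(s, h, tape)], index[(ns, nh, ntape)]] = 1.0

    # Uncertainty over configurations now evolves linearly: p_next = p @ P
    p = np.zeros(len(configs)); p[index[('A', 0, (1, 0, 1, 0))]] = 1.0
    p = p @ P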
| ▲ | skissane 6 days ago | parent | prev | next [-] | | > but none of the LLMs (open source or not) are capable of backtracking even though there is plenty of room for a basic Prolog interpreter. This seems like a very obvious shortcoming to me that no amount of smooth approximation can overcome. The fundamental autoregressive architecture is absolutely capable of backtracking… we generate next token probabilities, select a next token, then calculate probabilities for the token thereafter. There is absolutely nothing stopping you from “rewinding” to an earlier token, making a different selection and replaying from that point. The basic architecture absolutely supports it. Why then has nobody implemented it? Maybe, this kind of backtracking isn’t really that useful. | | |
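A hedged sketch of what such a rewind loop could look like; next_token_probs and is_dead_end are hypothetical stand-ins for a model call and for some scoring/validation step, not any particular library's API.

    import random

    def generate_with_backtracking(prompt, next_token_probs, is_dead_end, max_len=50):
        # next_token_probs(seq) -> {token: prob}   (stand-in for a model call)
        # is_dead_end(seq) -> bool                 (stand-in for a validation step)
        seq = list(prompt)
        banned = {}                                # position -> tokens ruled out there
        while len(seq) < max_len:
            pos = len(seq)
            probs = dict(next_token_probs(seq))
            for t in banned.get(pos, ()):
                probs.pop(t, None)                 # don't retry choices that dead-ended
            if not probs:                          # everything at this position failed
                if len(seq) <= len(prompt):        # whole tree under the prompt exhausted
                    break
                banned.pop(pos, None)              # forget local bans, back up one step
                last = seq.pop()
                banned.setdefault(len(seq), set()).add(last)
                continue
            tok = random.choices(list(probs), weights=list(probs.values()))[0]
            seq.append(tok)
            if is_dead_end(seq):                   # rewind one token and ban that choice
                seq.pop()
                banned.setdefault(pos, set()).add(tok)
        return seq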
| ▲ | versteegen 6 days ago | parent | next [-] | | Yes, but anyway, LLMs themselves are perfectly capable of backtracking their reasoning even though sampling runs forwards only, in the same way humans do: by deciding something doesn't work and trying something else. Humans DON'T travel backwards in time so as to never have had the erroneous thought in the first place. | |
| ▲ | measurablefunc 6 days ago | parent | prev [-] | | Where is this spelled out formally and proven logically? | | |
| ▲ | skissane 6 days ago | parent [-] | | LLM backtracking is an active area of research, see e.g. https://arxiv.org/html/2502.04404v1 https://arxiv.org/abs/2306.05426 And I was wrong that nobody has implemented it, as these papers prove, people have… it is just that the results haven't been sufficiently impressive to support the transition from the research lab to industrial use - or at least, not yet | |
| ▲ | measurablefunc 6 days ago | parent | next [-] | | > Empirical evaluations demonstrate that our proposal significantly enhances the reasoning capabilities of LLMs, achieving a performance gain of over 40% compared to the optimal-path supervised fine-tuning method. | |
| ▲ | afiori 6 days ago | parent | prev [-] | | I would expect to see something like this soonish, as right now we are seeing the end of training scaling and the beginning of inference scaling | |
| ▲ | foota 6 days ago | parent [-] | | This is a neat observation, training has been optimized to hell and inference is just beginning. |
|
|
|
| |
| ▲ | bondarchuk 6 days ago | parent | prev [-] | | Backtracking makes sense in a search context, which is basically what Prolog is. Why would you expect a next-token-predictor to do backtracking, and what should that even look like? | |
| ▲ | PaulHoule 6 days ago | parent | next [-] | | If you want general-purpose generation then it has to be able to respect constraints (e.g. that figure art of a person has 0..1 belly buttons and 0..2 legs is unspoken). As it is, generative models usually get those things right, but not always, because they can stick together the tiles they use internally in some combination that makes sense locally but not globally. General intelligence may not be SAT/SMT solving but it has to be able to do it, hence, backtracking. Today I had another of those experiences of the weaknesses of LLM reasoning, one that happens a lot when doing LLM-assisted coding. I was trying to figure out how to rebuild some CSS after the HTML changed for accessibility purposes and got a good idea for how to do it from talking to the LLM, but at that point the context was poisoned, probably because there was a lot of content in the context describing what we were thinking about at different stages of the conversation, which evolved considerably. It lost its ability to follow instructions: I'd tell it specifically to do this or do that and it just wouldn't do it properly, and this happens a lot if a session goes on too long. My guess is that the attention mechanism is locking on to parts of the conversation which are no longer relevant to where I think we're at. In general, the logic that considers the variation of either a practice (instances) or a theory over time is a very tricky problem, and 'backtracking' is a specific answer for maintaining your knowledge base across a search process. | |
| ▲ | photonthug 6 days ago | parent | next [-] | | > General intelligence may not be SAT/SMT solving but it has to be able to do it, hence, backtracking. Just to add some more color to this. For problems that completely reduce to formal methods or have significant subcomponents that involve it, combinatorial explosion in state-space is a notorious problem and N variables is going to stick you with 2^N at least. It really doesn't matter whether you think you're directly looking at solving SAT/search, because it's too basic to really be avoided in general. When people talk optimistically about hallucinations not being a problem, they generally mean something like "not a problem in the final step" because they hope they can evaluate/validate something there, but what about errors somewhere in the large middle? So even with a very tiny chance of hallucinations in general, we're talking about an exponential number of opportunities in implicit state-transitions to trigger those low-probability errors. The answer to stuff like this is supposed to be "get LLMs to call out to SAT solvers". Fine, definitely moving from state-space to program-space is helpful, but it also kinda just pushes the problem around as long as the unconstrained code generation is still prone to hallucination.. what happens when it validates, runs, and answers.. but the spec was wrong? Personally I'm most excited about projects like AlphaEvolve that seem fearless about hybrid symbolics / LLMs and embracing the good parts of GOFAI that LLMs can make tractable for the first time. Instead of the "reasoning is dead, long live messy incomprehensible vibes", those guys are talking about how to leverage earlier work, including things like genetic algorithms and things like knowledge-bases.[0] Especially with genuinely new knowledge-discovery from systems like this, I really don't get all the people who are still staunchly in either an old-school / new-school camp on this kind of thing. [0]: MLST on the subject: https://www.youtube.com/watch?v=vC9nAosXrJw | | |
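For the "call out to a SAT solver" step, a small sketch of the usual pattern, assuming the python-sat (pysat) package is installed; the exactly-one constraint below is made up for illustration and isn't tied to any specific LLM integration.

    from itertools import combinations
    from pysat.solvers import Glucose3

    solver = Glucose3()
    solver.add_clause([1, 2, 3, 4])               # at least one of the four options
    for a, b in combinations([1, 2, 3, 4], 2):
        solver.add_clause([-a, -b])               # no two options at once (exactly-one)
    solver.add_clause([-2])                       # suppose option 2 was ruled out upstream
    print(solver.solve(), solver.get_model())     # True and one satisfying assignment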
| ▲ | PaulHoule 5 days ago | parent [-] | | When I was interested in information extraction I saw the problem of resolving language to a semantic model [1] as containing an SMT problem. That is, words are ambiguous, sentences can parse different ways, you have to resolve pronouns and explicit subjects, objects and stuff like that. Seen that way the text is a set of constraints with a set of variables for all the various choices you make determining it. And of course there is a theory of the world such that "causes must precede their effects" and all the world knowledge about instances such as "Chicago is in Illinois". The problem is really worse than that because you'll have to parse sentences that weren't generated by sound reasoners or that live in a different microtheory, deal with situations that are ambiguous anyway, etc. Which is why that program never succeeded. [1] in short: database rows |
| |
| ▲ | XenophileJKO 6 days ago | parent | prev [-] | | What if you gave the model a tool to "willfully forget" a section of context? That would be easy to make. Hmm, I might be onto something. | |
| ▲ | PaulHoule 6 days ago | parent [-] | | I guess you could have some kind of mask that would let you suppress some of the context from matching, but my guess is that kind of thing might cause problems as often as it solves them. Back when I was thinking about commonsense reasoning with logic it was obviously a much more difficult problem to add things like "P was true before time t", "there will be some time t in the future such that P is true", "John believes Mary believes that P is true", "It is possible that P is true", "there is some person q who believes that P is true", particularly when you combine these qualifiers. For one thing you don't even have a sound and complete strategy for reasoning over first-order logic + arithmetic, but you also have a combinatorial explosion over the qualifiers. Back in the day I thought it was important to have sound reasoning procedures, but one of the reasons none of my foundation models ever became ChatGPT was that I cared about that, when I really needed to ask "does change C cause an unsound procedure to get the right answer more often?" and not care if the reasoning procedure was sound or not. |
|
| |
| ▲ | measurablefunc 6 days ago | parent | prev [-] | | I don't expect a Markov chain to be capable of backtracking. That's the point I am making. Logical reasoning as it is implemented in Prolog interpreters is not something that can be done w/ LLMs regardless of the size of their weights, biases, & activation functions between the nodes in the graph. | | |
| ▲ | bondarchuk 6 days ago | parent | next [-] | | Imagine the context window contains A-B-C, C turns out a dead end and we want to backtrack to B and try another branch. Then the LLM could produce outputs such that the context window would become A-B-C-[backtrack-back-to-B-and-don't-do-C] which after some more tokens could become A-B-C-[backtrack-back-to-B-and-don't-do-C]-D. This would essentially be backtracking and I don't see why it would be inherently impossible for LLMs as long as the different branches fit in context. | | |
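A toy sketch of that idea, using a made-up [backtrack:N] marker convention; the point is only that a forward-growing context can encode a tree search, since the effective branch can be recovered by replaying the markers.

    def effective_path(context):
        # Interpret "[backtrack:N]" as "discard everything after the first N steps".
        path = []
        for tok in context:
            if tok.startswith("[backtrack:"):
                path = path[:int(tok[len("[backtrack:"):-1])]
            else:
                path.append(tok)
        return path

    ctx = ["A", "B", "C", "[backtrack:2]", "D"]
    print(effective_path(ctx))   # ['A', 'B', 'D'] -- C abandoned, D explored instead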
| ▲ | measurablefunc 6 days ago | parent [-] | | If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain. This is a simple enough problem that can be implemented in a few dozen lines of Prolog but I've never seen a solver implemented as a Markov chain. | | |
| ▲ | Ukv 6 days ago | parent | next [-] | | > If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain Have each of the Markov chain's states be one of 10^81 possible sudoku grids (a 9x9 grid of digits 1-9 and blank), then calculate the 10^81-by-10^81 transition matrix that takes each incomplete grid to the valid complete grid containing the same numbers. If you want you could even have it fill one square at a time rather than jump right to the solution, though there's no need to. Up to you what you do for ambiguous inputs (select one solution at random to give 1.0 probability in the transition matrix? equally weight valid solutions? have the states be sets of boards and map to set of all valid solutions?) and impossible inputs (map to itself? have the states be sets of boards and map to empty set?). Could say that's "cheating" by pre-computing the answers and hard-coding them in a massive input-output lookup table, but to my understanding that's also the only sense in which there's equivalence between Markov chains and LLMs. | | |
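A minimal sketch of that lookup-table construction, shrunk to 4x4 sudoku so it actually runs; rows of the "transition matrix" are built on demand by an ordinary backtracking solver, and the start grid is made up for illustration (0 means blank).

    def solve(grid):                        # plain depth-first backtracking, 4x4
        try:
            i = grid.index(0)
        except ValueError:
            return grid                     # no blanks left: solved
        r, c = divmod(i, 4)
        br, bc = 2 * (r // 2), 2 * (c // 2)
        for v in range(1, 5):
            ok_row = all(grid[4 * r + k] != v for k in range(4))
            ok_col = all(grid[4 * k + c] != v for k in range(4))
            ok_box = all(grid[4 * (br + dr) + (bc + dc)] != v
                         for dr in range(2) for dc in range(2))
            if ok_row and ok_col and ok_box:
                s = solve(grid[:i] + (v,) + grid[i + 1:])
                if s:
                    return s
        return None

    transition = {}                         # the (sparse) "transition matrix"
    start = (1, 2, 0, 0,
             0, 0, 1, 2,
             0, 1, 0, 3,
             4, 0, 0, 0)
    transition[start] = solve(start)        # this row: solved grid with probability 1.0
    print(transition[start])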
| ▲ | measurablefunc 6 days ago | parent [-] | | There are multiple solutions for each incomplete grid, so how are you calculating the transitions for a grid w/ a non-unique solution? Edit: I see you added questions for the ambiguities, but modulo those choices your solution will only almost work b/c it is not entirely extensionally equivalent. The transition graph and solver are almost extensionally equivalent, but whereas the Prolog solver will backtrack, there is no backtracking in the Markov chain and you have to re-run the chain multiple times to find all the solutions. | |
| ▲ | Ukv 6 days ago | parent | next [-] | | > but whereas the Prolog solver will backtrack there is no backtracking in the Markov chain and you have to re-run the chain multiple times to find all the solutions If you want it to give all possible solutions at once, you can just expand the state space to the power-set of sudoku boards, such that the input board transitions to the state representing the set of valid solved boards. | | |
| ▲ | measurablefunc 6 days ago | parent | next [-] | | That still won't work b/c there is no backtracking. The point is that there is no way to encode backtracking/choice points like in Prolog w/ a Markov chain. The argument you have presented is not extensionally equivalent to the Prolog solver. It is almost equivalent but it's missing choice points for starting at a valid solution & backtracking to an incomplete board to generate a new one. The typical argument for absorbing states doesn't work b/c sudoku is not a typical deterministic puzzle. | | |
| ▲ | Ukv 6 days ago | parent [-] | | > That still won't work b/c there is no backtracking. It's essentially just a lookup table mapping from input board to the set of valid output boards - there's no real way for it not to work (obviously not practical though). If board A has valid solutions B, C, D, then the transition matrix cell mapping {A} to {B, C, D} is 1.0, and all other entries in that row are 0.0. > The point is that there is no way to encode backtracking/choice points You can if you want, keeping the same variables as a regular sudoku solver as part of the Markov chain's state and transitioning instruction-by-instruction, rather than mapping directly to the solution - just that there's no particular need to when you've precomputed the solution. | | |
| ▲ | measurablefunc 6 days ago | parent [-] | | My point is that your initial argument was missing several key pieces & if you specify the entire state space you will see that it's not as simple as you thought initially. I'm not saying it can't be done but that it's actually much more complicated than simply saying just take an incomplete board state s & uniform transitions between s, s' for valid solutions s' that are compatible with s. In fact, now that I spelled out the issues I still don't think this is a formal extensional equivalence. Prolog has interactive transitions between the states & it tracks choice points so compiling a sudoku solver to a Markov chain requires more than just tracking the board state in the context. | | |
| ▲ | Ukv 6 days ago | parent [-] | | > My point is that your initial argument was missing several key pieces My initial example was a response to "If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain", describing how a Sudoku solver could be implemented as a Markov chain. I don't think there's anything missing from it - it solves all proper Sudokus, and I only left open the choice of how to handle improper Sudokus because that was unspecified (but trivial regardless of what's wanted). > I'm not saying it can't be done but that it's actually much more complicated If that's the case, then I did misinterpret your comments as saying it can't be done. But, I don't think it's really complicated regardless of whatever "ok but now it must encode choice points in its state" requirements are thrown at it - it's just a state-to-state transition look-up table. > so compiling a sudoku solver to a Markov chain requires more than just tracking the board state in the context. As noted, you can keep all the same variables as a regular Sudoku solver as part of the Markov chain's state and transition instruction-by-instruction, if that's what you want. If you mean inputs from a user, the same is true of LLMs, which are typically run interactively. Either model the whole universe including the user as part of the state transition table (maybe impossible, depending on your beliefs about the universe), or have user interaction take the current state, modify it, and use it as the initial state for a new run of the Markov chain. | |
| ▲ | measurablefunc 6 days ago | parent [-] | | > As noted, you can keep all the same variables as a regular Sudoku solver What are those variables exactly? | | |
| ▲ | Ukv 6 days ago | parent [-] | | For a depth-first solution (backtracking), I'd assume mostly just the partial solutions and a few small counters/indices/masks - like for tracking the cell we're up to and which cells were prefilled. Specifics will depend on the solver, but they can be made part of the Markov chain's state regardless. |
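A sketch of that, again on 4x4 sudoku: the chain's state is the whole solver state, (grid, stack of choice points), each deterministic step() is one transition, and backtracking is just another transition rather than something outside the chain. The puzzle is made up for illustration (0 means blank).

    def candidates(grid, i):
        r, c = divmod(i, 4)
        br, bc = 2 * (r // 2), 2 * (c // 2)
        used = {grid[4 * r + k] for k in range(4)}
        used |= {grid[4 * k + c] for k in range(4)}
        used |= {grid[4 * (br + dr) + (bc + dc)] for dr in range(2) for dc in range(2)}
        return [v for v in range(1, 5) if v not in used]

    def step(state):
        grid, stack = state
        if 0 not in grid:
            return state                                   # absorbing state: solved
        i = grid.index(0)
        cands = candidates(grid, i)
        if cands:                                          # descend: record a choice point
            return (grid[:i] + (cands[0],) + grid[i + 1:],
                    stack + ((i, tuple(cands[1:])),))
        while stack:                                       # dead end: backtrack
            (j, rest), stack = stack[-1], stack[:-1]
            grid = grid[:j] + (0,) + grid[j + 1:]          # undo the most recent choice
            if rest:                                       # try its next untried value
                return (grid[:j] + (rest[0],) + grid[j + 1:],
                        stack + ((j, rest[1:]),))
        return (grid, ())                                  # unsolvable from this start

    state = ((1, 0, 0, 4, 0, 0, 1, 0, 2, 0, 0, 3, 0, 3, 0, 0), ())
    while 0 in state[0]:                                   # iterate the deterministic chain
        state = step(state)
    print(state[0])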
|
|
|
|
| |
| ▲ | Certhas 5 days ago | parent | prev [-] | | People really don't appreciate what is possible in infinite (or more precisely: arbitrarily high) dimensional spaces. |
| |
| ▲ | 6 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | bboygravity 6 days ago | parent | prev | next [-] | | The LLM can just write the Prolog and solve the sudoku that way. I don't get your point. LLMs like Grok 4 can probably one-shot this today with the current state of the art. You can likely just ask it to solve any sudoku and it will do it (by writing code in the background and running it and returning the result). And this is still very early stage compared to what will be out a year from now. Why does it matter how it does it or whether this is strictly an LLM or an LLM with tools for any practical purpose? | |
| ▲ | PhunkyPhil 6 days ago | parent [-] | | The point isn't whether the output is correct or not, it's whether the actual net is doing "logical computation" a la Prolog. What you're suggesting is akin to me saying you can't build a house, then you go and hire someone to build a house. _You_ didn't build the house. | |
| ▲ | kaibee 6 days ago | parent [-] | | I feel like you're kinda proving too much. By the same reasoning, humans/programmers aren't generally intelligent either, because we can only mentally simulate relatively small state spaces of programs, and when my boss tells me to go build a tool, I'm not exactly writing raw x86 assembly. I didn't _build_ the tool, I just wrote text that instructed a compiler how to build the tool. Like the whole reason we invented SAT solvers is because we're not smart in that way. But I feel like you're trying to argue that LLMs at any scale are gonna be less capable than an average person? |
|
| |
| ▲ | lelanthran 6 days ago | parent | prev | next [-] | | > If you think it is possible then I'd like to see an implementation of a sudoku puzzle solver as Markov chain. This is a simple enough problem that can be implemented in a few dozen lines of Prolog but I've never seen a solver implemented as a Markov chain. I think it can be done. I started a chatbot that works like this some time back (2024) but have paused work on it since January. In brief, you shorten the context by discarding the context that didn't work out. |
| ▲ | sudosysgen 6 days ago | parent | prev [-] | | You can do that pretty trivially for any fixed size problem (as in solvable with a fixed-sized tape Turing machine), you'll just have a titanically huge state space. The claim of the LLM folks is that the models have a huge state space (they do have a titanically huge state space) and can navigate it efficiently. Simply have a deterministic Markov chain where each state is a possible value of the tape+state of the TM and which transitions accordingly. | | |
|
| |
| ▲ | Certhas 6 days ago | parent | prev | next [-] | | Take a finite-tape Turing machine with an alphabet of N symbols and tape length T, giving N^T possible tape states. Now consider that you have a probability for each state instead of a definite state. The transitions of the Turing machine induce transitions of the probabilities. These transitions define a Markov chain on an N^T-dimensional probability space. Is this useful? Absolutely not. It's just a trivial rewriting. But it shows that high-dimensional spaces are extremely powerful. You can trade off sophisticated transition rules for high dimensionality. | |
| ▲ | vidarh 6 days ago | parent | prev | next [-] | | A (2,3) Turing machine can be trivially implemented with a loop around an LLM that treats the context as an IO channel, and a Prolog interpreter runs on a Turing complete computer, and so per Truing equivalence you can run a Prolog interpreter on an LLM. Of course this would be pointless, but it demonstrates that a system where an LLM provides the logic can backtrack, as there's nothing computationally special about backtracking. That current UIs to LLMs are set up for conversation-style use that makes this harder isn't an inherent limitation of what we can do with LLMs. | | |
| ▲ | measurablefunc 6 days ago | parent [-] | | Loop around an LLM is not an LLM. | | |
| ▲ | vidarh 6 days ago | parent [-] | | Then no current systems you are using are LLMs | | |
| ▲ | measurablefunc 6 days ago | parent [-] | | LLMs are choice-free feedforward graphs. The inputs/outputs are extensionally equivalent to the context and transition probabilities of a Markov chain. What exactly is your argument? Because what it looks like to me is that you're simply making a Turing tarpit argument, which does not address any of my points. | |
| ▲ | vidarh 6 days ago | parent [-] | | My argument is that artificially limiting what you argue about to a subset of the systems people are actually using, and then arguing about the limitations of that subset, makes your argument irrelevant to what people are actually using. | |
| ▲ | measurablefunc 4 days ago | parent [-] | | So where is the error exactly? The loop around it is simply a repetition of the argument for the equivalence between an LLM & a Markov chain. It doesn't matter how many times you sample trajectories from either one, they're still extensionally equivalent. | |
| ▲ | vidarh 2 days ago | parent [-] | | Since an LLM with a loop is trivially and demonstrably Turing complete if you allow it to use the context as an IO channel (and thereby memory), by extension arguing there's some limitation that prevents an LLM from doing what Prolog can is logically invalid. In other words, this claim is categorically false: > Logical reasoning as it is implemented in Prolog interpreters is not something that can be done w/ LLMs regardless of the size of their weights, biases, & activation functions between the nodes in the graph. What is limiting "just" an LLM is not the ability of the model to encode reasoning, but the lack of a minimal and trivial runtime scaffolding to let it use it's capabilities. | | |
| ▲ | measurablefunc 2 days ago | parent [-] | | > Since an LLM with a loop is trivially and demonstrably Turing complete Where is the demonstration? | | |
| ▲ | vidarh 2 days ago | parent [-] | | In every LLM app you have available that can set temperature. You can try a template like this:
    Turing machine transition table:
    q₀, 0 → q₀, 0, R
    q₀, 1 → q₁, 1, R
    q₀, B → q₀, B, L
    q₁, 0 → q₁, 0, R
    q₁, 1 → q₀, 1, R
    q₁, B → q₁, B, L
    Current state: [STATE]
    Current symbol: [SYMBOL]
    What is the next transition?
    Format: NEW_STATE|WRITE_SYMBOL|DIRECTION
You might have to tweak it depending on the given model you pick, but given you set temperature to 0, once you've tweaked it so it works on that model for all 6 combinations of state and symbol, it will continue working. Repeat it, and provide a loop with the IO as indicated by the output. If you struggle with the notion that an LLM can handle a lookup table from 6 combinations of states and symbols to 6 tuples, you're looking for excuses to disagree and/or don't understand how simple a UTM is. The above is a sufficient definition for someone who understands a UTM to figure out how to execute the machine step by step. | |
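A hedged sketch of that driver loop: the LLM is only asked to look up one row of the transition table per step, while the wrapper supplies the tape (the memory) and the iteration. call_llm is a hypothetical stand-in for any temperature-0 completion call, not a specific library's API, and the template just restates the one above in ASCII.

    PROMPT = """Turing machine transition table:
    q0, 0 -> q0, 0, R
    q0, 1 -> q1, 1, R
    q0, B -> q0, B, L
    q1, 0 -> q1, 0, R
    q1, 1 -> q0, 1, R
    q1, B -> q1, B, L
    Current state: {state}
    Current symbol: {symbol}
    What is the next transition?
    Format: NEW_STATE|WRITE_SYMBOL|DIRECTION"""

    def run_tm(call_llm, tape, state="q0", steps=100):
        head = 0
        for _ in range(steps):
            if head < 0:                        # grow the tape with blanks as needed
                tape.insert(0, "B"); head = 0
            if head >= len(tape):
                tape.append("B")
            reply = call_llm(PROMPT.format(state=state, symbol=tape[head]))
            state, write, direction = reply.strip().split("|")
            tape[head] = write
            head += 1 if direction == "R" else -1
        return tape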
| ▲ | measurablefunc a day ago | parent [-] | | Let me know when you manage to implement the Fibonacci sequence w/ this. It should be doable since you seem to think you already have a Turing machine implemented on an LLM w/ a loop around it. |
|
|
|
|
|
|
|
|
| |
| ▲ | 6 days ago | parent | prev [-] | | [deleted] |
|
|
|
|
| ▲ | baselessness 6 days ago | parent | prev | next [-] |
That's what this debate has been reduced to. People point out the logical and empirical, by now very obvious limitations of LLMs. And boosters respond with the equivalent of Chopra's "quantum physics means anything is possible", saying "if you add enough information to a system, anything is possible". |
| |
| ▲ | yorwba 5 days ago | parent [-] | | The argument isn't that anything is possible for LLMs, but that representing LLMs as Markov chains doesn't demonstrate a limitation, because the resulting Markov chain would be huge, much larger than the LLM, and anything that is possible is possible with a large enough Markov chain. If you limit yourself to Markov chains where the full transition matrix can be stored in a reasonable amount of space (which is the kind of Markov chain that people usually have in mind when they think that Markov chains are very limited), LLMs cannot be represented as such a Markov chain. If you want to show limitations of LLMs by reducing them to another system of computation, you need to pick one that is more limited than LLMs appear to be, not less. | | |
| ▲ | ariadness 5 days ago | parent [-] | | > anything that is possible is possible with a large enough Markov chain This is not true. Do you mean anything that is possible to compute? If yes, then you missed the point entirely. | |
| ▲ | yorwba 5 days ago | parent [-] | | It's mostly a consequence of the laws of physics having the Markov property. So the time evolution of any physical system can be modeled as a Markov process. Of course the corresponding state space may in general be infinite. |
|
|
|
|
| ▲ | awesome_dude 6 days ago | parent | prev | next [-] |
| I think that the difference can be best explained thus: I guess that you are most likely going to have cereal for breakfast tomorrow, I also guess that it's because it's your favourite. vs I understand that you don't like cereal for breakfast, and I understand that you only have it every day because a Dr told you that it was the only way for you to start the day in a way that aligns with your health and dietary needs. Meaning, I can guess based on past behaviour and be right, but understanding the reasoning for those choices, that's a whole other ballgame. Further, if we do end up with an AI that actually understands, well, that would really open up creativity, and problem solving. |
| |
| ▲ | quantummagic 6 days ago | parent [-] | | How are the two cases you present fundamentally different? Aren't they both the same _type_ of knowledge? Why do you attribute "true understanding" to the case of knowing what the Dr said? Why stop there? Isn't true understanding knowing why we trust what the doctor said (all those years of schooling, and a presumption of competence, etc)? And why stop there? Why do we value years of schooling? Understanding, can always be taken to a deeper level, but does that mean we didn't "truly" understand earlier? And aren't the data structures needed to encode the knowledge, exactly the same for both cases you presented? | | |
| ▲ | awesome_dude 6 days ago | parent [-] | | When you ask that question, why don't you just use a corpus of the previous answers to get some result? Why do you need to ask me, isn't a guess based on past answers good enough? Or, do you understand that you need to know more, you need to understand the reasoning based on what's missing from that post? | | |
| ▲ | quantummagic 6 days ago | parent [-] | | I asked that question in an attempt to not sound too argumentative. It was rhetorical. I'm asking you to consider the fact that there isn't actually any difference between the two examples you provided. They're fundamentally the same type of knowledge. They can be represented by the same data structures. There's _always_ something missing, left unsaid in every example, it's the nature of language. As for your example, the LLM can be trained to know the underlying reasons (doctor's recommendation, etc.). That knowledge is not fundamentally different from the knowledge that someone tends to eat cereal for breakfast. My question to you, was an attempt to highlight that the dichotomy you were drawing, in your example, doesn't actually exist. | | |
| ▲ | awesome_dude 6 days ago | parent [-] | | > They're fundamentally the same type of knowledge. They can be represented by the same data structures. Maybe, maybe one is based on correlation, the other causation. | | |
| ▲ | quantummagic 6 days ago | parent [-] | | What if the causation had simply been that he enjoyed cereal for breakfast? In either case, the results are the same, he's eating cereal for breakfast. We can know this fact without knowing the underlying cause. Many times, we don't even know the cause of things we choose to do for ourselves, let alone what others do. On top of which, even if you think the "cause" is that the doctor told him to eat a healthy diet, do you really know the actual cause? Maybe the real cause, is that the girl he fancies, told him he's not in good enough shape. The doctor telling him how to get in shape is only a correlation, the real cause is his desire to win the girl. These connections are vast and deep, but they're all essentially the same type of knowledge, representable by the same data structures. | | |
| ▲ | awesome_dude 6 days ago | parent [-] | | > In either case, the results are the same, he's eating cereal for breakfast. We can know this fact without knowing the underlying cause. Many times, we don't even know the cause of things we choose to do for ourselves, let alone what others do. Yeah, no. Understanding the causation allows the system to provide a better answer. If they "enjoy" cereal, what about it do they enjoy, and what other possible things can be had for breakfast that also satisfy that enjoyment? You'll never find that by looking only at the fact that they have eaten cereal for breakfast. And the fact that that's not obvious to you is why I cannot be bothered going into any more depth on the topic. It's clear that you don't have any understanding of the topic beyond a superficial glance. Bye :) |
|
|
|
|
|
|
|
| ▲ | arduanika 6 days ago | parent | prev | next [-] |
| What hinting? The comment was very clear. Arbitrarily good approximation is different from symbolic understanding. "if you can implement it in a brain" But we didn't. You have no idea how a brain works. Neither does anyone. |
| |
| ▲ | mallowdram 6 days ago | parent | next [-] | | We know the healthy brain is unpredictable. We suspect error minimization and prediction are not central tenets. We know the brain creates memory via differences in sharp wave ripples. That it's oscillatory. That it neither uses symbols nor represents. That words are wholly external to what we call thought.
The authors deal with molecules which are neither arbitrary nor specific. Yet tumors ARE specific, while words are wholly arbitrary. Knowing these things should offer a deep suspicion of ML/LLMs. They have so little to do with how brains work and the units brains actually use (all oscillation is specific, all stats emerge from arbitrary symbols and worse: metaphors) that mistaking LLMs for reasoning/inference is less lexemic hallucination and more eugenic. | | |
| ▲ | quantummagic 6 days ago | parent | next [-] | | What do you think about the idea that LLMs are not reasoning/inferring, but are rather an approximation of the result? Just like you yourself might have to spend some effort reasoning, on how a plant grows, in order to answer questions about that subject. When asked, you wouldn't replicate that reasoning, instead you would recall the crystallized representation of the knowledge you accumulated while previously reasoning/learning. The "thinking" in the process isn't modelled by the LLM data, but rather by the code/strategies used to iterate over this crystallized knowledge, and present it to the user. | | |
| ▲ | mallowdram 6 days ago | parent [-] | | This is the toughest part. We need some kind of analog external that concatenates. It's software, but not necessarily binary; it uses topology to express that analog. It somehow is visual, i.e. you can see it, but at the same time it can be expanded specifically into syntax, the details of which are invisible. Scale invariance is probably key. |
| |
| ▲ | Zigurd 6 days ago | parent | prev | next [-] | | "That words are wholly external to what we call thought." may be what we should learn, or at least hypothesize, based on what we see LLMs doing. I'm disappointed that AI isn't more of a laboratory for understanding brain architecture, and precisely what is this thing called thought. | | |
| ▲ | mallowdram 6 days ago | parent [-] | | The question is how to model the irreducible. And then to concatenate between spatiotemporal neuroscience (the oscillators) and neural syntax (what's oscillating) and add or subtract what the fields are doing to bind that to the surroundings. |
| |
| ▲ | suddenlybananas 6 days ago | parent | prev [-] | | We don't know those things about the brain. I don't know why you keep going around HN making wildly false claims about the state of contemporary neuroscience. We know very very little about how higher order cognition works in the brain. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | Of course we know these things about the brain, and who said anything about higher order cognition? I'd stay current, you seem to be a legacy thinker.
I'll needle drop ONE of the references re: unpredictability and brain health, there are about 30, just to keep you in your corner. The rest you'll have to hunt down, but please stop pretending you know what you're talking about. Your line of attack which is to dismiss from a pretend point of certainty, rather than inquiry and curiosity, seems indicative of the cog-sci/engineering problem in general. There's an imposition based in intuition/folk psychology that suffuses the industry. The field doesn't remain curious to new discoveries in neurobiology, which supplants psychology (psychology is being based, neuro is neural based). What this does is remove the intent of rhetoric/being and suggest brains built our external communication. The question is how and by what regularities. Cog-sci has no grasp of that in the slightest. https://pubmed.ncbi.nlm.nih.gov/38579270/ | | |
|
| |
| ▲ | Certhas 6 days ago | parent | prev | next [-] | | We didn't, but somebody did, so it's possible, and therefore probabilistic dynamics in high enough dimensions can do it. We don't understand what LLMs are doing. You can't go from understanding what a transformer is to understanding what an LLM does any more than you can go from understanding what a neuron is to understanding what a brain does. |
| ▲ | jjgreen 6 days ago | parent | prev [-] | | You can look at it, from the inside. |
|
|
| ▲ | patrick451 6 days ago | parent | prev [-] |
| > Either way, I can get arbitrarily good approximations of arbitrary nonlinear differential/difference equations using only linear probabilistic evolution at the cost of a (much) larger state space. This is impossible. When driven by a sinusoid, a linear system will only ever output a sinusoid with exactly the same frequency but a different amplitude and phase regardless of how many states you give it. A non-linear system can change the frequency or output multiple frequencies. |
| |
| ▲ | diffeomorphism 6 days ago | parent [-] | | As far as I understand, the terminology says "linear" but means compositions of affine maps (with cutoffs etc.). That gives you arbitrary polynomials and piecewise affine functions, which are dense in most classes of interest. Of course, in practice you don't actually get arbitrary-degree polynomials but some finite degree, so the approximation might still be quite bad or inefficient. |
|