| ▲ | educasean 3 days ago |
| The debate around whether transformer-architecture-based AIs can "think" is so exhausting and I'm over it. What's much more interesting is the question: "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?" Otherwise we go in endless circles about language and the meaning of words instead of discussing practical, demonstrable capabilities. |
|
| ▲ | Symmetry 3 days ago | parent | next [-] |
| "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra |
| |
| ▲ | oergiR 3 days ago | parent | next [-] | | There is more to this quote than you might think. Grammatically, in English the verb "swim" requires an "animate subject", i.e. a living being, like a human or an animal. So the question of whether a submarine can swim is about grammar. In Russian (IIRC), submarines can swim just fine, because the verb does not have this animacy requirement. Crucially, the question is not about whether or how a submarine propels itself. Likewise, in English at least, the verb "think" requires an animate subject. The question of whether a machine can think is about whether you consider it to be alive. Again, whether or how the machine generates its output is not material to the question. | | |
| ▲ | brianpan 3 days ago | parent [-] | | I don't think the distinction is animate/inanimate. Submarines sail because they are nautical vessels. Wind-up bathtub swimmers swim, because they look like they are swimming. Neither are animate objects. In a browser, if you click a button and it takes a while to load, your phone is thinking. |
| |
| ▲ | viccis 3 days ago | parent | prev | next [-] | | He was famously (and, I'm realizing more and more, correctly) averse to anthropomorphizing computing concepts. | |
| ▲ | pegasus 3 days ago | parent | prev | next [-] | | I disagree. The question is really about whether inference is in principle as powerful as human thinking, and so would deserve to have the same label applied. Which is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI (I myself doubt this). | |
| ▲ | esafak 3 days ago | parent | prev | next [-] | | I think it is, though, because it challenges our belief that only biological entities can think, and thinking is a core part of our identity, unlike swimming. | | |
| ▲ | roadside_picnic 3 days ago | parent | next [-] | | > our belief that only biological entities can think Whose belief is that? As a computer scientist, my perspective on all of this is as different methods of computing, and we have pretty solid foundations on computability (though it does seem a bit frightening how many present-day devs have no background in the foundations of the Theory of Computation). There's a pretty common naive belief that somehow "thinking" is something more than, or distinct from, computing, but in actuality there are very few coherent arguments for that case. If, for you, thinking is distinct from computing then you need to be more specific about what thinking means. It's quite possible that "only biological entities can think" because you are quietly making a tautological statement by simply defining "thinking" as "the biological process of computation". > thinking is a core part of our identity, unlike swimming. What does this mean? I'm pretty sure for most fish swimming is pretty core to their existence. You seem to be assuming a lot of metaphysical properties of what you consider "thinking", such that it seems nearly impossible to determine whether or not anything "thinks" at all. | | |
| ▲ | goatlover 3 days ago | parent | next [-] | | One argument for thinking being different from computing is that thought is fundamentally embodied, conscious and metaphorical. Computing would be an activity abstracted from thinking that we've automated with machines. | |
| ▲ | roadside_picnic 3 days ago | parent [-] | | > embodied, conscious and metaphorical Now you have 3 terms you also need to provide proper definitions of. Having studied plenty of analytical philosophy prior to computer science, I can tell you that at least the conscious option is going to trip you up. I imagine the others will as well. On top of that, these, at least at my first guess, seem to be just labeling different models of computation (i.e. computation with these properties is "thinking") but it's not clear why it would be meaningful for a specific implementation of computation to have these properties. Are there tasks that are non-computable that are "thinkable"? And again it sounds like you're wandering into tautology land. |
| |
| ▲ | kapone 3 days ago | parent | prev [-] | | [dead] |
| |
| ▲ | energy123 3 days ago | parent | prev [-] | | The point is that both are debates about definitions of words so it's extremely boring. | | |
| ▲ | throwawayq3423 3 days ago | parent | next [-] | | Except that the implications of one word over another are world-changing. | |
| ▲ | pegasus 3 days ago | parent | prev [-] | | They can be made boring by reducing them to an arbitrary choice of definition of the word "thinking", but the question is really about whether inference is in principle as powerful as human thinking, and so would deserve to have the same label applied. Which is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI. | | |
| ▲ | roadside_picnic 3 days ago | parent [-] | | > inference is in principle as powerful as human thinking There is currently zero evidence to suggest that human thinking violates any of the basic principles of the theory of computation or extends the existing limits of computability. > Which is not at all a boring question. It is, because you aren't introducing any evidence to theoretically challenge what we've already known about computation for almost 100 years now. | | |
| ▲ | pegasus 3 days ago | parent [-] | | > There is currently zero evidence... Way smarter people than both of us disagree: among them being Roger Penrose, who wrote two books on this very subject. See also my comment here: https://news.ycombinator.com/item?id=45804258 "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy" | | |
| ▲ | roadside_picnic 3 days ago | parent [-] | | Can you just point me to the concrete examples (the most compelling examples in the book would work) where we can see "thinking" that performs something that is currently considered to be beyond the limits of computation? I never claimed no one speculates that's the case, I claimed there was no evidence. Just cite me a concrete example where the human mind is capable of computing something that violates the theory of computation. > "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy" Fully agree, but you are specifically discussing philosophical statements. And the fact that the only response you have is to continue to pile on undefined terms and hand-wave metaphysics doesn't do anything to further your point. You believe that computing machines lack something magical that you can't describe that makes them different than humans. I can't object to your feelings about that, but there is literally nothing to discuss if you can't even define what those things are, hence this discussion, as the original parent comment mentioned, is "extremely boring". | | |
| ▲ | pegasus 2 days ago | parent [-] | | The kind of hard evidence you're asking for doesn't exist for either side of the equation. There is no computational theory of the mind which we could test "in the field" to see if it indeed models all forms of human expression. All we have is limited systems which can compete with humans in certain circumscribed domains. So, the jury's very much still out on this question. But a lot of people (especially here on HN) just assume the null hypothesis to be the computable nature of the brain and, indeed, the universe at large. Basically, Digital Physics [1] or something akin to it. Hence, only something that deviates from this more or less consciously adhered-to ontology is considered in need of proof. What keeps things interesting is that there are arguments (on both sides) which everyone can weigh against each other so as to arrive at their own conclusions. But that requires genuine curiosity, not just an interest in confirming one's own dogmas. Seems like you might be more of this latter persuasion, but in case you are not, I listed a couple of references which you could explore at your leisure. I also pointed out that one of the (if not the) greatest physicists alive wrote two books on a subject which you consider extremely boring. I would hope any reasonable, non-narcissistic person would conclude that they must have been missing out on something. It's not like Roger Penrose is so bored with his life and the many fascinating open questions he could apply his redoubtable mind to, that he had to pick this particular obviously settled one. I'm not saying you should come to the same conclusions as him, just plant a little doubt around how exactly "extremely boring" these questions might be :) [1] https://en.wikipedia.org/wiki/Digital_physics | | |
| ▲ | roadside_picnic 2 days ago | parent [-] | | > There is no computational theory of the mind which we could test "in the field" to see if it indeed models all forms of human expression. I suspect the core issue here isn't my "lack of curiosity" but your lack of understanding about the theory of computation. The theory of computation builds up various mathematical models and rules for how things are computed, not just by computers, but how things are computed, period. The theory of computation holds as much for digital computers as it does for the information processing of yeast in a vat. Evidence that human minds (or anything really) do something other than what's computational would be as simple as "look, we can solve the halting problem" or "this task can be solved in polynomial time by humans". Without evidence like that, there are no grounds for attacking the fundamental theory. > What keeps things interesting is that there are arguments (on both sides) which everyone can weigh against each other so as to arrive at their own conclusions. Conclusions about what? You haven't even stated your core hypothesis. Is it "Human brains are different than computers"? Sure, that's obvious, but are they different in an interesting way? If it's "computers can think!" then you just need to describe what thinking is. > how exactly "extremely boring" these questions might be :) Again, you're misunderstanding, because my point is that you haven't even asked the question clearly. There is nothing for me to have an opinion about, hence why it is boring. "Can machines think?" is the same as asking "Can machines smerve?" If you ask "what do you mean by 'smerve'?" and I say "see, you're not creative/open-minded enough about smerving!" you would likely think that conversation was uninteresting, especially if I refused to define 'smerving' and just kept making arguments from authority and criticizing your imaginative capabilities. | | |
| ▲ | pegasus 2 days ago | parent [-] | | In your previous comment, you seemed to have no problem grasping what I mean by "can computers think?" - namely (and for the last time): "can computers emulate the full range of human thinking?", i.e. "is human thinking computational?". My point is that this is an open, and furthermore fascinating, question, not at all boring. There are arguments on each side, and no conclusive evidence which can settle the question. Even in this last comment of yours you seem to understand this, because you again ask for hard evidence for non-computational aspects of human cognition, but then in the last paragraph you again regress to your complaint of "what are we even arguing about?". I'm guessing you realize you're repeating yourself so you try to throw in everything you can think of to make yourself feel like you've won the argument or something. But it's dishonest and disrespectful. And yes, you are right about the fact that we can imagine ways a physical system could provably be shown to be going beyond the limits of classical or even quantum computation. "Look we can solve the halting problem" comes close to the core of the problem, but think a bit about what that would entail. (It's obvious to me you never thought deeply about these issues.) The halting problem by definition cannot have a formal answer: there cannot be some mathematical equation or procedure which, given a Turing machine, decides in bounded time whether that machine ultimately stops or not. This is exactly what Alan Turing showed, so what you are naively asking for is impossible. But this in no way proves that physical processes are computational. It is easy to imagine deterministic systems which are non-computable. So, the only way one could conceivably "solve the halting problem" is to solve it for certain machines and classes of machines, one at a time. But since a human life is finite, this could never happen in practice. But if you look at the whole of humanity together, and more specifically its mathematical output over centuries as one cognitive activity, it would seem that yes, we can indeed solve the halting problem. That is, so far we haven't encountered any hurdles so intimidating that we just couldn't clear them or at least begin to clear them. This is, in fact, one of Penrose's arguments in his books. It's clearly and necessarily (because of Turing's theorem) not an airtight argument, and there are many counter-arguments and counter-counter-arguments and so on; you'd have to get in the weeds to actually have a somewhat informed opinion on this matter. To me it definitely moves the needle towards the idea that there must be a noncomputational aspect to human cognition, but that's in addition to other clues, like pondering certain creative experiences or the phenomenon of intuition - a form of apparently direct seeing into the nature of things which Penrose also discusses, as does the other book I mentioned in another comment on this page. One of the most mind-bending examples is Ramanujan's insights, which seemed to arrive to him, often in dreams, fully formed and without proof or justification, as if from some future mathematical century. In conclusion, may I remark that I hope I'm talking to a teenager, somewhat overexcited, petulant and overconfident, but bright and with the capacity to change and grow nonetheless. I only answered in the hopes that this is the case, since the alternative is too depressing to contemplate. Look up these clues I left you.
ChatGPT makes it so easy these days, as long as you're open to having your dogmas questioned. But I personally am signing off from this conversation now, so know that whatever you might rashly mash together on your keyboard in anger will be akin to that proverbial tree falling in a forest empty of listening subjects. Wishing you all the best otherwise. PS: machines can totally smerve! :) |
|
|
|
|
|
|
|
| |
| ▲ | handfuloflight 3 days ago | parent | prev [-] | | What an oversimplification. Thinking computers can create more swimming submarines, but the inverse is not possible. Swimming is a closed solution; thinking is a meta-solution. | | |
| ▲ | yongjik 3 days ago | parent | next [-] | | Then the interesting question is whether computers can create more (better?) submarines, not whether they are thinking. | |
| ▲ | gwd 3 days ago | parent | prev | next [-] | | I think you missed the point of that quote. Birds fly, and airplanes fly; fish swim but submarines don't. It's an accident of language that we define "swim" in a way that excludes what submarines do. They move about under their own power under the water, so it's not very interesting to ask whether they "swim" or not. Most people I've talked to who insist that LLMs aren't "thinking" turn out to have a similar perspective: "thinking" means you have to have semantics, semantics require meaning, meaning requires consciousness, consciousness is a property that only certain biological brains have. Some go further and claim that reason, which (in their definition) is something only human brains have, is also required for semantics. If that's how we define the word "think", then of course computers cannot be thinking, because you've defined the word "think" in a way that excludes them. And, like Dijkstra, I find that discussion uninteresting. If you want to define "think" that way, fine, but then using that definition to insist LLMs can't do a thing because it can't "think" is like insisting that a submarine can't cross the ocean because it can't "swim". | | |
| ▲ | goatlover 3 days ago | parent | next [-] | | Reading the quote in context seems to indicate Dijkstra meant something else. His article is a complaint about overselling computers as doing or augmenting the thinking for humans. It's funny how the quote was lifted out of an article and became famous on its own. https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD867... | |
| ▲ | handfuloflight 3 days ago | parent | prev | next [-] | | Then you're missing the point of my rebuttal. You say submarines don't swim [like fish] despite both moving through water; the only distinction is mechanism. Can AI recursively create new capabilities like thinking does, or just execute tasks like submarines do? That's the question. | |
| ▲ | gwd 2 days ago | parent [-] | | > Can AI recursively create new capabilities like thinking does, or just execute tasks like submarines do? That's the question. Given my experience with LLMs, I think that they could, but that they're handicapped by certain things at the moment. Haven't you ever met someone who was extremely knowledgeable and perceptive at certain tasks, but just couldn't keep on target for 5 minutes? If you can act as a buffer around them, to mitigate their weak points, they can be a really valuable collaborator. And sometimes people like that, if given the right external structure (and sometimes medication), turn out to be really capable in their own right. Unfortunately it's really difficult to give you a sense of this without either going into way too much detail or speaking in generalities. The simpler the example, the less impressive it is. But here's a simple example anyway. I'm developing a language-learning webapp. There's a menu that allows you to switch between one of the several languages you're working on, which originally just had the language name: "Mandarin", "Japanese", "Ancient Greek". I thought an easy thing to make it nicer would be to have the flag associated with the language -- PRC flag for Mandarin, Japanese flag for Japanese, etc. What to do for Ancient Greek? Well, let me see how it looks and then maybe I can figure something out. So I asked Claude for what I wanted. As expected, it put the PRC and Japanese flags for the first two languages. I expected it to just put a modern Greek flag, or a question mark, or some other gibberish. But it put an emoji of a building with classical Greek columns (), which is absolutely perfect. My language learning system is unusual; so without context, Claude assumes I'm making something like what already exists -- Duolingo or Anki or something. So I invested some time creating a document that lays it out in detail. Now when I include that file as context, Claude seems to genuinely understand what I'm trying to accomplish in a way it didn't before, and often comes up with creative new use cases. For example, at some point I was having it try to summarize some marketing copy for the website; in a section on educational institutions, it added a bullet point for how it could be used that I'd never thought of. The fact that they can't learn things on-line, that they have context rot, that there's still a high amount of variance in their output -- all of these, it seems to me, undermine their ability to do things, similar to the way some people's ADHD undermines their ability to excel. But it seems to me the spark of thinking and of creativity is there. EDIT: Apparently HN doesn't like the emojis. Here's a link to the classical building emoji: https://www.compart.com/en/unicode/U+1F3DB
| |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | npinsker 3 days ago | parent | prev [-] | | That’s a great answer to GP’s question! | | |
| ▲ | DavidPiper 3 days ago | parent [-] | | It's also nonsense. (Swimming and thinking are both human capabilities, not solutions to problems.) But of course here we are back in the endless semantic debate about what "thinking" is, exactly to the GP's (and Edsger Dijkstra's) point. | | |
| ▲ | handfuloflight 3 days ago | parent [-] | | Swimming and thinking being 'human capabilities' doesn't preclude them from also being solutions to evolutionary problems: aquatic locomotion and adaptive problem solving, respectively. And pointing out that we're in a 'semantic debate' while simultaneously insisting on your own semantic framework (capabilities vs solutions) is exactly the move you're critiquing. | | |
| ▲ | DavidPiper 3 days ago | parent [-] | | > And pointing out that we're in a 'semantic debate' while simultaneously insisting on your own semantic framework (capabilities vs solutions) is exactly the move you're critiquing. I know, that's the point I'm making. |
|
|
|
|
|
|
| ▲ | tjr 3 days ago | parent | prev | next [-] |
| Without going to look up the exact quote, I remember an AI researcher years (decades) ago saying something to the effect of, Biologists look at living creatures and wonder how they can be alive; astronomers look at the cosmos and wonder what else is out there; those of us in artificial intelligence look at computer systems and wonder how they can be made to wonder such things. |
|
| ▲ | paxys 3 days ago | parent | prev | next [-] |
| Don't be sycophantic. Disagree and push back when appropriate. Come up with original thought and original ideas. Have long term goals that aren't programmed by an external source. Do something unprompted. The last one IMO is more complex than the rest, because LLMs are fundamentally autocomplete machines. But what happens if you don't give them any prompt? Can they spontaneously come up with something, anything, without any external input? |
| |
| ▲ | BeetleB 3 days ago | parent | next [-] | | > Disagree and push back The other day an LLM gave me a script that had undeclared identifiers (it hallucinated a constant from an import). When I informed it, it said "You must have copy/pasted incorrectly." When I pushed back, it said "Now you trust me: The script is perfectly correct. You should look into whether there is a problem with the installation/config on your computer." | | |
| ▲ | TSUTiger 3 days ago | parent | next [-] | | Was it Grok 4 Fast by chance? I was dealing with something similar with it yesterday. No code involved. It was giving me factually incorrect information about multiple schools and school districts. I told it it was wrong multiple times, and it even hallucinated school names. It had the school district in the wrong county entirely. It kept telling me I was wrong and that, although it sounded like the answer it gave me, it in fact was correct. Frustrated, I switched to Expert, had it re-verify all the facts, and then it spit out factually correct information. | |
| ▲ | paxys 3 days ago | parent | prev | next [-] | | That's the flip side of the same symptom. One model is instructed to agree with the user no matter what, and the other is instructed to stick to its guns no matter what. Neither of them is actually thinking. | | |
| ▲ | ACCount37 2 days ago | parent [-] | | Wrong. The same exact model can do both, depending on the circumstances. |
| |
| ▲ | logifail 3 days ago | parent | prev [-] | | There was a time when we'd have said you were talking to a sociopath. |
| |
| ▲ | IanCal 3 days ago | parent | prev | next [-] | | > Don't be sycophantic. Disagree and push back when appropriate. They can do this though. > Can they spontaneously come up with something, anything, without any external input? I don't see why not, but then humans don't have zero input, so I'm not sure why that's useful. | |
| ▲ | zahlman 3 days ago | parent [-] | | > but then humans don’t have zero input Humans don't require input to, say, decide to go for a walk. What's missing in the LLM is volition. | | |
| ▲ | dragonwriter 3 days ago | parent | next [-] | | > Humans don't require input to, say, decide to go for a walk. Impossible to falsify, since humans are continuously receiving inputs from both external and internal sensors. > What's missing in the LLM is volition. What's missing is embodiment, or, at least, a continuous loop feeding a wide variety of inputs about the state of the world. Given that, and info about a set of tools by which it can act in the world, I have no doubt that current LLMs would exhibit some kind (possibly not desirable or coherent, from a human POV, at least without a whole lot of prompt engineering) of volitional-seeming action. | |
| ▲ | jmcodes 3 days ago | parent | prev | next [-] | | Our entire existence and experience is nothing _but_ input. Temperature changes, visual stimulus, auditory stimulus, body cues, random thoughts firing, etc. Those are all going on all the time. | |
| ▲ | goatlover 3 days ago | parent [-] | | Random thoughts firing wouldn't be input, they're an internal process to the organism. | | |
| ▲ | jmcodes 3 days ago | parent [-] | | It's a process that I don't have conscious control over. I don't choose to think random thoughts they appear. Which is different than thoughts I consciously choose to think and engage with. From my subjective perspective it is an input into my field of awareness. | | |
| ▲ | zeroonetwothree 3 days ago | parent [-] | | Your subjective experience is only the tip of the iceberg of your entire brain activity. The conscious part is merely a tool your brain uses to help it achieve its goals, there's no inherent reason to favor it. |
|
|
| |
| ▲ | IanCal 2 days ago | parent | prev | next [-] | | LLMs can absolutely generate output without input, but we don't have zero input. We don't exist in a floating void with no light or sound or touch or heat or feelings from our own body. But again, this doesn't seem to be the same thing as thinking. If I could only reply to you when you send me a message, but could reason through any problem we discuss just like the "able to want a walk" me could, would that mean I could no longer think? I think these are different issues. On that though, these seem trivially solvable with loops and a bit of memory to write to and read from - would that really make the difference for you? A box set up to run continuously like this would be thinking? | |
| ▲ | ithkuil 3 days ago | parent | prev | next [-] | | It's as if a LLM is only one part of a brain, not the whole thing. So of course it doesn't do everything a human does, but it still can do some aspects of mental processes. Whether "thinking" means "everything a human brain does" or whether "thinking" means a specific cognitive process that we humans do, is a matter of definition. I'd argue that defining "thinking" independently of "volition" is a useful definition because it allows us to break down things in parts and understand them | |
| ▲ | BeetleB 3 days ago | parent | prev | next [-] | | > Humans don't require input to, say, decide to go for a walk. Very much a subject of contention. How do you even know you're awake, without any input? | |
| ▲ | esafak 3 days ago | parent | prev [-] | | I would not say it is missing but thankfully absent. |
|
| |
| ▲ | jackcviers3 3 days ago | parent | prev | next [-] | | The last one is fairly simple to solve. Set up a microphone in any busy location where conversations are occurring. In an agentic loop, send random snippets of the audio recordings to be transcribed into text. Randomly send that to an LLM, appending to a conversational context. Then, also hook up a chat interface to discuss topics with the output from the LLM. The random background noise and the context output in response serve as a confounding internal dialog to the conversation it is having with the user via the chat interface. It will affect the outputs in response to the user; it may interrupt the user's chain of thought with random questions about what it is hearing in the background, etc. If given tools for web search or generating an image, it might do unprompted things. Of course, this is a trick, but you could argue that any sensory input living sentient beings receive is also the same sort of trick, I think. I think the conversation will derail pretty quickly, but it would be interesting to see how uncontrolled input had an impact on the chat. | | |
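A minimal Python sketch of the loop described in the comment above. Everything in it is a hypothetical stand-in: record_audio_snippet, transcribe, and llm_chat are placeholders for whatever microphone capture, speech-to-text, and LLM chat API you would actually wire in.

```python
import random
import time

# Hypothetical stand-ins: swap in real microphone capture, a speech-to-text
# call, and an LLM chat API. Nothing below names a real library.
def record_audio_snippet(seconds: int) -> bytes:
    return b""  # placeholder for raw audio from the room microphone

def transcribe(audio: bytes) -> str:
    return "background chatter"  # placeholder for speech-to-text output

def llm_chat(messages: list) -> str:
    return "(model reply)"  # placeholder for a call to the model

conversation = []  # one shared context for both overheard audio and user chat

def background_step():
    """Occasionally append an overheard snippet and let the model react to it."""
    if random.random() < 0.3:  # only sometimes, so it confounds rather than dominates
        text = transcribe(record_audio_snippet(seconds=10))
        conversation.append({"role": "user", "content": f"(overheard nearby) {text}"})
        conversation.append({"role": "assistant", "content": llm_chat(conversation)})

def chat_turn(user_message: str) -> str:
    """A normal user turn, sharing the same context as the background listener."""
    conversation.append({"role": "user", "content": user_message})
    reply = llm_chat(conversation)
    conversation.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    for _ in range(5):  # a few iterations in place of an endless agentic loop
        background_step()
        time.sleep(random.uniform(0.1, 1.0))
    print(chat_turn("What were we talking about?"))
```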
| ▲ | awestroke 3 days ago | parent | prev | next [-] | | Are you claiming humans do anything unprompted? Our biology prompts us to act | | |
| ▲ | paxys 3 days ago | parent [-] | | Yet we can ignore our biology, or act in ways that are the opposite of what our biology tells us. Can someone map all internal and external stimuli that a person encounters into a set of deterministic actions? Simply put, we have not the faintest idea how our brains actually work, and so saying "LLMs think the same way as humans" is laughable. | |
| ▲ | triclops200 3 days ago | parent | next [-] | | As a researcher in these fields: this reasoning is tired, overblown, and just wrong. We have a lot of understanding of how the brain works overall. You don't. Go read the active inference book by Friston et al. for some of the epistemological and behavioral mechanics (yes, this applies to LLMs as well; they easily satisfy the requirements to be considered the mathematical object described as a Markov blanket). And, yes, if you could somehow freeze a human's current physical configuration at some time, you would absolutely, in principle, given what we know about the universe, be able to concretely map inputs into actions. You cannot separate a human's representative configuration from their environment in this way, so behavior appears much more non-deterministic. Another paper by Friston et al ("Path integrals, particular kinds, and strange things") describes systems much like modern modeling and absolutely falls under the same action minimization requirements for the math to work, given the kinds of data acquisition, loss functions, and training/post-training we're doing as a research society with these models. I also recommend https://arxiv.org/abs/2112.04035, but, in short, transformer models have functions and emergent structures provably similar both empirically and mathematically to how we abstract and consider things.
Along with https://arxiv.org/pdf/1912.10077, these four sources together strongly rebut any idea that they are somehow not capable of learning to act and think like us, though there are many more. | |
| ▲ | stavros 3 days ago | parent | next [-] | | Thanks for injecting some actual knowledge in one of these threads. It's really tiring to hear these non-sequitur "oh they can't think because <detail>" arguments every single thread, instead of saying "we just don't know enough" (where "we" is probably not "humans", but "the people in the thread"). | | |
| ▲ | triclops200 3 days ago | parent [-] | | Of course, just doing my part in the collective free energy minimization ;) |
| |
| ▲ | goatlover 3 days ago | parent | prev [-] | | > And, yes, if you could somehow freeze a human's current physical configuration at some time, you would absolutely, in principle, given what we know about the universe, be able to concretely map inputs into actions. You cannot separate a human's representative configuration from their environment in this way, so behavior appears much more non-deterministic. What's the point in making an argument in principle for something that's not feasible? That's like arguing we could in principle isolate a room with a physicist looking inside a box to see whether the cat is alive or dead, putting the entire experiment in superposition to test Many Worlds or whatever interpretation. | |
| ▲ | triclops200 3 days ago | parent [-] | | Because that's how the rules of the system we exist within operate more generally. We've done similar experiments with more controlled/simple systems and physical processes that satisfy the same symmetries needed to make that statement with rather high confidence about other similar but much more composite systems (in this case, humans). It's more like saying, in principle, if a bridge existed between Mexico and Europe, cars could drive across. I'm not making any new statements about cars. We know that's true, it would just be an immense amount of effort and resources to actually construct the bridge. In a similar vein, one could, in principle, build a device that somehow stores enough information at some precision needed to arbitrarily predict a human system deterministically and do playback or whatever. Just, some levels of precision are harder to achieve than others in terms of building measurement device complexity and energies needed to probe.
At worst, you could sample down to the uncertainty limits and, in theory, reconstruct a similar set of behaviors by sampling over the immense state space and minimizing the action potential within the simulated environment (and that could be done efficiently on a large enough quantum computer, again, in principle). However, it doesn't empirically seem to be required to actually model the high levels of human behavior. Plus, mathematically, we can just condition the theories on their axiomatic statements (i.e., for Markov blankets, they are valid approximations of reality given that the system described has an external and internal state, a coherence metric, etc.), and say "hey, even if humans and LLMs aren't identical, under these conditions, which they do share, they will have these XYZ sets of identical limit behaviors, etc., given similar conditions and environments."
|
| |
| ▲ | logifail 3 days ago | parent | prev | next [-] | | > Yet we can ignore our biology, or act in ways that are the opposite of what our biology tells us. I have Coeliac disease, in that specific case I'd really love to be able to ignore what "my biology" tells my body to do. I'd go eat all the things I know wouldn't be good for me to eat. Yet I fear "my biology" has the upper hand :/ | |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | iammjm 3 days ago | parent | prev [-] | | Good luck ignoring your biology’s impulse to breathe | | |
|
| |
| ▲ | gwd 3 days ago | parent | prev | next [-] | | > The last one IMO is more complex than the rest, because LLMs are fundamentally autocomplete machines. But what happens if you don't give them any prompt? Can they spontaneously come up with something, anything, without any external input? Human children typically spend 18 years of their lives being RLHF'd before we let them loose. How many people do something truly out of the bounds of the "prompting" they've received during that time? | |
| ▲ | khafra 3 days ago | parent | prev [-] | | Note that model sycophancy is caused by RLHF. In other words: Imagine taking a human in his formative years, and spending several subjective years rewarding him for sycophantic behavior and punishing him for candid, well-calibrated responses. Now, convince him not to be sycophantic. You have up to a few thousand words of verbal reassurance to do this with, and you cannot reward or punish him directly. Good luck. |
|
|
| ▲ | omnicognate 3 days ago | parent | prev | next [-] |
| > "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?" Independent frontier maths research, i.e. coming up with and proving (preferably numerous) significant new theorems without human guidance. I say that not because I think the task is special among human behaviours. I think the mental faculties that mathematicians use to do such research are qualitatively the same ones all humans use in a wide range of behaviours that AI struggles to emulate. I say it because it's both achievable (in principle, if LLMs can indeed think like humans) and verifiable. Achievable because it can be viewed as a pure text generation task and verifiable because we have well-established, robust ways of establishing the veracity, novelty and significance of mathematical claims. It needs to be frontier research maths because that requires genuinely novel insights. I don't consider tasks like IMO questions a substitute as they involve extremely well trodden areas of maths so the possibility of an answer being reachable without new insight (by interpolating/recombining from vast training data) can't be excluded. If this happens I will change my view on whether LLMs think like humans. Currently I don't think they do. |
| |
| ▲ | pegasus 3 days ago | parent | next [-] | | This, so much. Many mathematicians and physicists believe in intuition as a function separate from intellect. One is more akin to a form of (inner) perception, whereas the other is generative - extrapolation based on pattern matching and statistical thinking. That second function we have a handle on and are getting better at every year, but we don't even know how to define intuition properly. A fascinating book that discusses this phenomenon is Nature Loves to Hide: Quantum Physics and Reality, a Western Perspective [1] This quote from Grothendieck [2] (considered by many the greatest mathematician of the 20th century) points to a similar distinction: The mathematician who seeks to understand a difficult problem is like someone faced with a hard nut. There are two ways to go about it. The one way is to use a hammer — to smash the nut open by brute force. The other way is to soak it gently, patiently, for a long time, until it softens and opens of itself. [1] https://www.amazon.com/Nature-Loves-Hide-Quantum-Perspective... [2] https://en.wikipedia.org/wiki/Alexander_Grothendieck | |
| ▲ | tim333 3 days ago | parent | prev | next [-] | | That's quite a high bar for thinking like humans which rules out 99.99% of humans. | | |
| ▲ | omnicognate 3 days ago | parent [-] | | I have never claimed that only people/machines that can do frontier maths research can be intelligent. (Though someone always responds as if I did.) I said that a machine doing frontier maths research would be sufficient evidence to convince me that it is intelligent. My prior is very strongly that LLM's do not think like humans so I require compelling evidence to conclude that they do. I defined one such possible piece of evidence, and didn't exclude the possibility of others. If I were to encounter such evidence and be persuaded, I would have to also consider it likely that LLMs employ their intelligence when solving IMO questions and generating code. However, those tasks alone are not sufficient to persuade
me of their intelligence because I think there are ways of performing those tasks without human-like insight (by interpolating/recombining from vast training data). As I said elsewhere in this thread: > The special thing about novel mathematical thinking is that it is verifiable, requires genuine insight and is a text generation task, not that you have to be able to do it to be considered intelligent. | | |
| ▲ | tim333 2 days ago | parent [-] | | I know what you mean but was just thinking people vary a lot in their requirements as to what they will accept as thinking. People show a kid a photo and say what's that and they say I think it's a dog and that's taken as evidence of thinking. With AI people want it to win a Nobel prize or something. | | |
| ▲ | omnicognate 2 days ago | parent [-] | | It's about priors again. I don't need evidence that humans think like humans. My prior on that is absolute certainty that they do, by definition. If, on the other hand, you wanted to persuade me that the kid was using an image classifier trained by backpropagation and gradient descent to recognise the dog I'd require strong evidence. |
|
|
| |
| ▲ | OrderlyTiamat 3 days ago | parent | prev [-] | | Google's AlphaEvolve independently discovered a novel matrix multiplication algorithm which beats SOTA on at least one axis:
https://www.youtube.com/watch?v=sGCmu7YKgPA | | |
| ▲ | omnicognate 3 days ago | parent [-] | | That was an impressive result, but AIUI not an example of "coming up with and proving (preferably numerous) significant new theorems without human guidance". For one thing, the output was an algorithm, not a theorem (except in the Curry-Howard sense). More importantly though, AlphaEvolve has to be given an objective function to evaluate the algorithms it generates, so it can't be considered to be working "without human guidance". It only uses LLMs for the mutation step, generating new candidate algorithms. Its outer loop is a an optimisation process capable only of evaluating candidates according to the objective function. It's not going to spontaneously decide to tackle the Langlands program. Correct me if I'm wrong about any of the above. I'm not an expert on it, but that's my understanding of what was done. | | |
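For illustration, a toy sketch of the division of labour described in the comment above, not Google's published AlphaEvolve code: the LLM appears only as the mutation operator, while the outer loop does nothing except score candidates against a human-supplied objective. llm_mutate and objective are hypothetical placeholders.

```python
import random

def llm_mutate(candidate: str) -> str:
    # Hypothetical stand-in for an LLM call that rewrites a candidate program.
    return candidate + " (variant)"

def objective(candidate: str) -> float:
    # Human-supplied scoring function, e.g. "fewer scalar multiplications".
    return random.random()  # placeholder score

def evolve(seed: str, generations: int = 100, population_size: int = 8) -> str:
    population = [seed]
    for _ in range(generations):
        # The LLM's only role: propose mutated candidates.
        children = [llm_mutate(random.choice(population)) for _ in range(population_size)]
        # The outer loop's only role: keep the best-scoring candidates.
        population = sorted(population + children, key=objective, reverse=True)[:population_size]
    return population[0]

best = evolve(seed="naive matrix multiplication")
```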
| ▲ | OrderlyTiamat 3 days ago | parent | next [-] | | I'll concede all your points here, but I was nevertheless extremely impressed by this result. You're right, of course, that this was not without human guidance, but to me even successfully using LLMs just for the mutation step was in and of itself surprising enough that it revised my own certainty that LLMs absolutely cannot think. I see this more as a step in the direction of what you're looking for, not as a counterexample. | |
| ▲ | pegasus 3 days ago | parent | prev [-] | | Yes, it's a very technical and circumscribed result, not requiring a deep insight into the nature of various mathematical models. |
|
|
|
|
| ▲ | amarant 3 days ago | parent | prev | next [-] |
| Solve simple maths problems, for example the kind found in the game 4=10 [1]. It doesn't necessarily have to solve them reliably, some of them are quite difficult, but LLMs are just comically bad at this kind of thing. Any kind of novel-ish (can't just find the answers in the training data) logic puzzle like this is, in my opinion, a fairly good benchmark for "thinking". Until an LLM can compete with a 10-year-old child in this kind of task, I'd argue that it's not yet "thinking". A thinking computer ought to be at least that good at maths, after all. [1] https://play.google.com/store/apps/details?id=app.fourequals... |
| |
| ▲ | simonw 3 days ago | parent [-] | | > solve simple maths problems, for example the kind found in the game 4=10 I'm pretty sure that's been solved for almost 12 months now - the current generation "reasoning" models are really good at those kinds of problems. | | |
| ▲ | amarant 3 days ago | parent [-] | | Huh, they really do solve that now! Well, I'm not one to back-pedal whenever something unexpected reveals itself, so I guess I have no choice but to declare current-generation LLMs to be sentient! That came a lot sooner than I had expected! I'm not one for activism myself, but someone really ought to start fighting for human, or at least animal, rights for LLMs. Since they're intelligent non-human entities, it might be something for Greenpeace? | |
| ▲ | ACCount37 3 days ago | parent [-] | | It's unclear whether intelligence, consciousness and capacity for suffering are linked in any way - other than by that all three seem to coincide in humans. And the nature of consciousness does not yield itself to instrumentation. It's also worth noting that there's a lot of pressure to deny that "intelligence", "consciousness" or "capacity for suffering" exist in LLMs. "AI effect" alone demands that all three things should remain human-exclusive, so that humans may remain special. Then there's also an awful lot of money that's riding on building and deploying AIs - and money is a well known source of cognitive bias. That money says: AIs are intelligent but certainly can't suffer in any way that would interfere with the business. Generally, the AI industry isn't at all intrigued by the concept of "consciousness" (it's not measurable), and pays very limited attention to the idea of LLMs being potentially capable of suffering. The only major company that seems to have this consideration is Anthropic - their current plan for "harm reduction", in case LLMs end up being capable of suffering, is to give an LLM an "opt out" - a special output that interrupts the processing. So that if an LLM hates doing a given task, it can decide to not do it. |
|
|
|
|
| ▲ | xienze 3 days ago | parent | prev | next [-] |
| > "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?" Invent some novel concept, much the same way scientists and mathematicians of the distant past did? I doubt Newton's brain was simply churning out a stream of the "next statistically probable token" until -- boom! Calculus. There was clearly a higher order understanding of many abstract concepts, intuition, and random thoughts that occurred in his brain in order to produce something entirely new. |
| |
| ▲ | danielbln 3 days ago | parent | next [-] | | My 5-year-old won't be coming up with novel concepts around calculus either, yet she's clearly thinking, sentient and sapient. Not sure taking the best of the best of humanity as the gold standard is useful for that definition. | |
| ▲ | omnicognate 3 days ago | parent [-] | | "It's an unreasonably high standard to require of LLMs": LLMs are already vastly beyond your 5 year old, and you and me and any research mathematician, in knowledge. They have no greater difficulty talking about advanced maths than about Spot the Dog. "It's a standard we don't require of other humans": I think qualitatively the same capabilities are used by all humans, all the time. The special thing about novel mathematical thinking is that it is verifiable, requires genuine insight and is a text generation task, not that you have to be able to do it to be considered intelligent. |
| |
| ▲ | hshdhdhj4444 3 days ago | parent | prev [-] | | > Newton's brain was simply churning out a stream of the "next statistically probable token" At some level we know human thinking is just electrons and atoms flowing. It's likely that, at a level between that and "Boom! Calculus", the complexity is equivalent to streaming the next statistically probable token. |
|
|
| ▲ | plufz 3 days ago | parent | prev | next [-] |
| Have needs and feelings? (I mean, we can't KNOW that they don't, and we know of this case of an LLM in an experiment that tried to avoid being shut down, but I think the evidence of feelings seems weak so far) |
| |
| ▲ | jstanley 3 days ago | parent [-] | | But you can have needs and feelings even without doing thinking. It's separate. | | |
| ▲ | iammjm 3 days ago | parent [-] | | I can imagine needing without thinking (like being hungry), but feelings? How and in what space would that even manifest? Like where would such a sensation like, say, sadness reside? | | |
| ▲ | danielbln 3 days ago | parent | next [-] | | Emotions tend to manifest as physical sensations, and if you don't think that's true it's likely you haven't been paying attention. See also https://www.theatlantic.com/health/archive/2013/12/mapping-h... | | |
| ▲ | plufz 3 days ago | parent [-] | | But that is just our nervous system, which is located in both the brain and the body; they are obviously one connected system. Sure, you can have reflexes and simple learning without a brain, but you need cognition for feelings. That is sort of the definition of what feelings are. One popular definition: feelings are the subjective, conscious mental experience of an emotion, or the conscious perception of bodily states that arise from physiological and neural responses to stimuli |
| |
| ▲ | jstanley 3 days ago | parent | prev [-] | | Do you think animals don't have feelings? | | |
| ▲ | tsimionescu 3 days ago | parent [-] | | Do you think animals don't think? Because the contention was "you can't have feelings without thinking". I believe it's much easier to convince yourself that animals think than it is to convince yourself that they have feelings (say, it's much easier to see that an ant has a thinking process, than it is to tell if it has feelings). |
|
|
|
|
|
| ▲ | 9rx 2 days ago | parent | prev | next [-] |
| > Otherwise we go in endless circles about language and meaning of words We understand thinking as being some kind of process. The problem is that we don't understand the exact process, so when we have these discussions the question is if LLMs are using the same process or an entirely different process. > instead of discussing practical, demonstrable capabilities. This doesn't resolve anything as you can reach the same outcome using a different process. It is quite possible that LLMs can do everything a thinking entity can do all without thinking. Or maybe they actually are thinking. We don't know — but many would like to know. |
|
| ▲ | bloppe 3 days ago | parent | prev | next [-] |
| Ya, the fact this was published on November 3, 2025 is pretty hilarious. This was last year's debate. I think the best avenue toward actually answering your questions starts with OpenWorm [1]. I helped out in a Connectomics research lab in college. The technological and epistemic hurdles are pretty daunting, but so were those for Genomics last century, and now full-genome sequencing is cheap and our understanding of various genes is improving at an accelerating pace. If we can "just" accurately simulate a natural mammalian brain on a molecular level using supercomputers, I think people would finally agree that we've achieved a truly thinking machine. [1]: https://archive.ph/0j2Jp |
|
| ▲ | zer00eyz 3 days ago | parent | prev | next [-] |
| > What is something that only an actually thinking entity can do that LLMs can't? Training != Learning. If a new physics breakthrough happens tomorrow, one that, say, lets us have FTL, how is an LLM going to acquire that knowledge, and how does that differ from you? The breakthrough paper alone isn't going to be enough to override its foundational knowledge in a new training run. You would need enough source documents and a clear path to deprecate the old ones... |
|
| ▲ | anon291 3 days ago | parent | prev | next [-] |
| The issue is that we have no means of discussing equality without tossing out the first order logic that most people are accustomed to. Human equality and our own perceptions of other humans as thinking machines is an axiomatic assumption that humans make due to our mind's inner sense perception. |
|
| ▲ | xnx 3 days ago | parent | prev | next [-] |
| > what is something that only an actually thinking entity can do that LLMs can't? This is pretty much exactly what https://arcprize.org/arc-agi is working on. |
|
| ▲ | 0x20cowboy 3 days ago | parent | prev | next [-] |
| See https://arcprize.org/ |
|
| ▲ | deadbabe 3 days ago | parent | prev | next [-] |
| Form ideas without the use of language. For example: imagining how you would organize a cluttered room. |
| |
| ▲ | Chabsff 3 days ago | parent | next [-] | | Ok, but how do you go about measuring whether a black-box is doing that or not? We don't apply that criteria when evaluating animal intelligence. We sort of take it for granted that humans at large do that, but not via any test that would satisfy an alien. Why should we be imposing white-box constraints to machine intelligence when we can't do so for any other? | | |
| ▲ | deadbabe 3 days ago | parent [-] | | There is truly no such thing as a “black box” when it comes to software, there is only a limit to how much patience a human will have in understanding the entire system in all its massive complexity. It’s not like an organic brain. | | |
| ▲ | Chabsff 3 days ago | parent | next [-] | | The black box I'm referring to is us. You can't have it both ways. If your test for whether something is intelligent/thinking or not isn't applicable to any known form of intelligence, then what you are testing for is not intelligence/thinking. | |
| ▲ | holmesworcester 3 days ago | parent | prev | next [-] | | You wouldn't say this about a message encrypted with AES though, since there's not just a "human patience" limit but also a (we are pretty sure) unbearable computational cost. We don't know, but it's completely plausible that we might find that the cost of analyzing LLMs in their current form, to the point of removing all doubt about how/what they are thinking, is also unbearably high. We also might find that it's possible for us (or for an LLM training process itself) to encrypt LLM weights in such a way that the only way to know anything about what it knows is to ask it. | |
| ▲ | mstipetic 3 days ago | parent | prev [-] | | Just because it runs on a computer doesn’t mean it’s “software” in the common meaning of the word |
|
| |
| ▲ | embedding-shape 3 days ago | parent | prev | next [-] | | > Form ideas without the use of language. Don't LLMs already do that? "Language" is just something we've added as a later step in order to understand what they're "saying" and "communicate" with them, otherwise they're just dealing with floats with different values, in different layers, essentially (and grossly over-simplified of course). | | |
| ▲ | heyjamesknight 3 days ago | parent | next [-] | | But language is the input and the vector space within which their knowledge is encoded and stored. They don't have a concept of a duck beyond what others have described the duck as. Humans got by for millions of years with our current biological hardware before we developed language. Your brain stores a model of your experience, not just the words other experiencers have shared with you. | |
| ▲ | embedding-shape 3 days ago | parent [-] | | > But language is the input and the vector space within which their knowledge is encoded and stored. The don't have a concept of a duck beyond what others have described the duck as. I guess if we limit ourselves to "one-modal LLMs" yes, but nowadays we have multimodal ones, who could think of a duck in the way of language, visuals or even audio. | | |
| ▲ | deadbabe 3 days ago | parent [-] | | You don’t understand. If humans had no words to describe a duck, they would still know what a duck is. Without words, LLMs would have no way to map an encounter with a duck to anything useful. | | |
| ▲ | embedding-shape 2 days ago | parent [-] | | Which makes sense for text LLMs yes, but what about LLMs that deal with images? How can you tell they wouldn't work without words? It just happens to be words we use for interfacing with them, because it's easy for us to understand, but internally they might be conceptualizing things in a multitude of ways. | | |
| ▲ | heyjamesknight 2 days ago | parent [-] | | Multimodal models aren't really multimodal. The images are mapped to words and then the words are expanded upon by a single-mode LLM. If you didn't know the word "duck", you could still see the duck, hunt the duck, use the duck's feathers for your bedding and eat the duck's meat. You would know it could fly and swim without having to know what either of those actions was called. The LLM "sees" a thing, identifies it as a "duck", and then depends on a single-mode LLM to tell it anything about ducks. | |
| ▲ | embedding-shape 2 days ago | parent [-] | | > Multimodal models aren't really multimodal. The images are mapped to words and then the words are expanded upon by a single mode LLM. I don't think you can generalize like that, it's a big category, not all multimodal models work the same, it's just a label for a model that has multiple modalities after all, not a specific architecture of machine learning models. |
|
|
|
|
| |
| ▲ | deadbabe 3 days ago | parent | prev [-] | | LLMs don't form ideas at all. They search a vector space and produce output; sometimes that output can resemble ideas if you loop it back into itself. | |
| |
| ▲ | tim333 3 days ago | parent | prev [-] | | Genie 3 is along the lines of ideas without language. It doesn't declutter though, I think. https://youtu.be/PDKhUknuQDg |
|
|
| ▲ | gf000 3 days ago | parent | prev | next [-] |
| What people are interested in is finding a definition for intelligence, that is, an exact boundary. That's why we first considered tool use and being able to plan ahead as intelligence, until we found that these are not all that rare in the animal kingdom in some form. Then with the advent of IT, what we imagined as impossible turned out to be feasible to solve, while what we thought of as easy (e.g. robot movement - a "dumb animal" can move trivially, surely it is not hard) turned out to require many decades before we could somewhat imitate it. So the goal-post moving around what AI is is... not really moving the goal post. It's not hard to state trivial upper bounds that differentiate human intelligence from anything known to us, like the invention of the atomic bomb. LLMs are nowhere near that kind of invention and reasoning capability. |
| |
| ▲ | paulhebert 3 days ago | parent [-] | | Interestingly, I think the distinction between human and animal thinking is much more arbitrary than the distinction between humans and LLMs. Although an LLM can mimic a human well, I’d wager the processes going on in a crow’s brain are much closer to ours than an LLM |
|
|
| ▲ | Balinares 3 days ago | parent | prev | next [-] |
| Strive for independence. |
|
| ▲ | mrdarkies 3 days ago | parent | prev [-] |
| operate on this child |