| |
| ▲ | ctoth 5 hours ago | parent | next [-] | | A small child's cognition is also "just" electrochemical signals propagating through neural tissue according to physical laws! The "just" is doing all the lifting. You can reductively describe any information processing system in a way that makes it sound like it couldn't possibly produce the outputs it demonstrably produces. "The sun is just hydrogen atoms bumping into each other" is technically accurate and completely useless as an explanation of solar physics. | | |
| ▲ | anonymous908213 5 hours ago | parent | next [-] | | You are making a point in favor of my argument, not against it. I routinely make the same argument you do against people trying to over-simplify things. LLM hypists frequently suggest that because brain activity is "just" electrochemical signals, there is no possible difference between an LLM and a human brain. This is, obviously, tremendously idiotic. I do believe it is within the realm of possibility to create machine intelligence; I don't believe in a magic soul or some other element that makes humans inherently special. However, if you do not engage in overt reductionism, the mechanism by which these electrochemical signals are generated is completely and totally different from the mechanism behind an LLM's processing. Human programming is substantially more complex, and it is fundamentally absurd to assume that our biological programming conveniently reduces to being exactly equivalent to the latest fad technology, and that we have therefore solved the secret of programming a brain, when the programs we have actually written perform exactly according to their programming and nothing greater. Edit: Case in point, a mere 10 minutes later we got someone making that exact argument in a sibling comment to yours! Nature is beautiful. | |
| ▲ | emp17344 5 hours ago | parent | prev [-] | | > A small child's cognition is also "just" electrochemical signals propagating through neural tissue according to physical laws! This is a thought-terminating cliche employed to avoid grappling with the overwhelming differences between a human brain and a language model. |
| |
| ▲ | mikepurvis 2 hours ago | parent | prev | next [-] | | Short-term memory is the context window, and it's a relatively short hop from the current state of affairs to an MCP server that gives the model access to a big queryable scratch space where it can note down anything it thinks might be important later. That's similar to how current-gen chatbots take multiple iterations to produce an answer; they're clearly not just emitting tokens right out of the gate, but rather using an internal notepad to iteratively work on an answer for you. Or maybe there's even a medium-term scratchpad that is managed automatically: it's fed all context as it occurs, and a parallel process mulls over that content in the background, periodically presenting chunks of it to the foreground thought process when they seem relevant. All I'm saying is there are good reasons not to consider current LLMs to be AGI, but "doesn't have long term memory" is not a significant barrier. | |
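(To make the scratchpad idea above concrete, here is a minimal sketch of what such a tool-backed scratch space could look like. The tool names note_down and recall and the SQLite-backed store are illustrative assumptions, not any particular MCP server's actual API.)

    import sqlite3

    class Scratchpad:
        """Toy long-term scratch space a model could write to and query via tool calls."""
        def __init__(self, path=":memory:"):
            self.db = sqlite3.connect(path)
            self.db.execute("CREATE TABLE IF NOT EXISTS notes (topic TEXT, body TEXT)")

        def note_down(self, topic, body):
            # The model calls this when it decides something might matter later.
            self.db.execute("INSERT INTO notes VALUES (?, ?)", (topic, body))
            self.db.commit()

        def recall(self, query):
            # Later turns query the store instead of relying on the context window.
            cur = self.db.execute(
                "SELECT topic, body FROM notes WHERE topic LIKE ? OR body LIKE ?",
                (f"%{query}%", f"%{query}%"))
            return cur.fetchall()

    pad = Scratchpad()
    pad.note_down("user-preferences", "prefers concise answers, works in Rust")
    print(pad.recall("Rust"))  # [('user-preferences', 'prefers concise answers, works in Rust')]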
| ▲ | pfisch 4 hours ago | parent | prev | next [-] | | Yes. I also don't think it is realistic to pretend you understand how frontier LLMs operate just because you understand the basic principles of the simple, not-very-good LLMs that came before. It's even more ridiculous than me pretending I understand how a rocket ship works because I know there is fuel in a tank, it gets lit on fire somehow, and the whole thing is aimed with some fins on the rocket... | | |
| ▲ | anonymous908213 4 hours ago | parent [-] | | The frontier LLMs have the same overall architecture as earlier models. I absolutely understand how they operate. I have worked at a startup where we heavily fine-tuned DeepSeek, among other smaller models, running on our own hardware. Both DeepSeek's 671B model and a Mistral 7B model operate according to the exact same principles. There is no magic in the process, and there is zero reason to believe that Sonnet or Opus is built on some impossible-to-understand architecture that is fundamentally alien to every other LLM's. |
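(For what it's worth, the "same overall architecture" point is visible even at the tooling level: both model families load and generate through the same generic causal-LM interface. A rough sketch, assuming the Hugging Face transformers library and these particular checkpoint names, and ignoring the very real hardware requirements of the 671B model.)

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Either checkpoint goes through the identical decoder-only pipeline:
    # tokenize, run the transformer stack, sample the next token, repeat.
    for name in ["mistralai/Mistral-7B-v0.1", "deepseek-ai/DeepSeek-V3"]:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)
        ids = tok("The capital of France is", return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=8)
        print(name, tok.decode(out[0], skip_special_tokens=True))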
| |
| ▲ | jvidalv 5 hours ago | parent | prev | next [-] | | What is the definition of intelligence? | | |
| ▲ | anonymous908213 5 hours ago | parent [-] | | Quoting an older comment of mine... Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.
Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
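(A toy way to spell out the invented constraint described above; the function name and the encoding are hypothetical. The comment's point is that a human can pick up such a rule and follow it on the fly, without it ever being written down like this.)

    def made_up_add(a, b):
        # Invented rule from the comment: "1 + n" just gives back n.
        if a == 1:
            return b
        return a + b

    print(made_up_add(1, 2))  # 2
    print(made_up_add(1, 3))  # 3
    print(made_up_add(1, 4))  # 4, the conclusion the comment derives from the made-up rule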
| | |
| ▲ | coldtea 4 hours ago | parent | next [-] | | >Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. This is not even wrong. >Probabilistic prediction is inherently incompatible with deterministic deduction. And this is just begging the question again. Probabilistic prediction could very well be how we do deterministic deduction - e.g. the weights can be strong enough, and the probability path for those deduction steps hot enough, that the same path is followed every time, even if the overall process is probabilistic. Probabilistic doesn't mean completely random. | | |
| ▲ | runarberg 4 hours ago | parent [-] | | At the risk of explaining the insult: https://en.wikipedia.org/wiki/Not_even_wrong Personally I think "not even wrong" is the perfect description of this line of argument. Intelligence is an extremely fraught scientific concept. We have been doing intelligence research for over a century and to date we have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (or often any) definition or description of intelligence end up being “not even wrong”. |
| |
| ▲ | famouswaffles 4 hours ago | parent | prev [-] | | >Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. Human intelligence is clearly not logic-based, so I'm not sure why you'd use such a definition. >and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call. One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers? Because your comment is just wrong. >What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules. Good thing LLMs can handle this just fine, I guess. Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past the initial years. You have a class of people who really think they know how intelligence works, but when you build it that way it fails completely. | | |
| ▲ | anonymous908213 4 hours ago | parent [-] | | > One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers ? Because your comment is just wrong. They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286 > Good thing LLMs can handle this just fine I guess. LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly. They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules. | | |
| ▲ | famouswaffles 4 hours ago | parent [-] | | >They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286 I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, never mind children. >LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly. Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right? >They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules. LLMs can play chess just fine (99.8% legal move rate, ~1800 Elo): https://arxiv.org/abs/2403.15498 https://arxiv.org/abs/2501.17186 https://github.com/adamkarvonen/chess_gpt_eval | | |
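(For context on how evaluations like the ones linked above typically measure a "legal move rate": the model is prompted with the game so far and its reply is checked against the legal moves in that position. A rough sketch of the checking side, assuming the python-chess library and a model reply in standard algebraic notation.)

    import chess

    def is_legal_reply(moves_so_far, model_output):
        """Replay the game, then test whether the model's SAN move is legal in the resulting position."""
        board = chess.Board()
        for san in moves_so_far:
            board.push_san(san)
        try:
            board.parse_san(model_output.strip())  # raises if the move is illegal or unparseable
            return True
        except ValueError:
            return False

    print(is_legal_reply(["e4", "e5", "Nf3"], "Nc6"))   # True
    print(is_legal_reply(["e4", "e5", "Nf3"], "Qxf3"))  # False: no black queen move reaches f3 here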
| ▲ | runarberg 3 hours ago | parent [-] | | I still have not been convinced that LLMs are anything more than super fancy (and expensive) curve-fitting algorithms. I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in a multi-parameter space. |
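(For reference, curve fitting in the literal sense looks like the sketch below, using numpy's polyfit; the analogy being drawn is that an LLM does something similar with billions of parameters over token sequences instead of three coefficients over a handful of points.)

    import numpy as np

    # Fit a quadratic to noisy samples of y = 2x^2 - 3x + 1: classic curve fitting.
    rng = np.random.default_rng(0)
    x = np.linspace(-2, 2, 50)
    y = 2 * x**2 - 3 * x + 1 + rng.normal(0, 0.3, x.size)

    coeffs = np.polyfit(x, y, deg=2)  # least-squares fit of the 3 coefficients
    print(coeffs)                     # roughly [ 2, -3, 1 ]
    print(np.polyval(coeffs, 1.5))    # predict at a new point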
| |
| ▲ | nurettin 3 hours ago | parent | prev | next [-] | | Intelligence is about acquiring and utilizing knowledge. Reasoning is about making sense of things. Words are concatenations of letters that form meaning. Inference is tightly coupled with meaning, which is coupled with reasoning and thus intelligence. People are paying for these monthly subscriptions to outsource reasoning, because it works. Half-assedly and with unnerving failure modes, but it works. What you probably mean is that it is not a mind in the sense that it is not conscious. It won't cringe or be embarrassed like you do; it costs nothing for an LLM to be awkward; it doesn't feel weird or get bored of you. Its curiosity is a mere autocomplete. But a child will feel all that, and learn all that, and be a social animal. | |
| ▲ | coldtea 5 hours ago | parent | prev [-] | | >The LLM is predicting what tokens come next, based on a bunch of math operations performed over a huge dataset. Whereas the child does what exactly, in your opinion? You know the child can just as well be said to "just do chemical and electrical exchanges", right? | |
| ▲ | jazzyjackson 4 hours ago | parent | next [-] | | Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon. The comparison is therefore annoying. | |
| ▲ | JoshTriplett 4 hours ago | parent [-] | | Intelligence does not require "chemical and electrical exchanges in a body". Are you attempting to axiomatically claim that only biological beings can be intelligent (in which case, that's not a useful definition for the purposes of this discussion)? If not, then that's a red herring. "Annoying" does not mean "false". | |
| ▲ | jazzyjackson 2 hours ago | parent [-] | | No I'm not making claims about intelligence, I'm making claims about the absurdity of comparing biological systems with silicon arrangements. |
|
| |
| ▲ | anonymous908213 4 hours ago | parent | prev [-] | | At least read the other replies that pre-emptively refuted this drivel before spamming it. | | |
| ▲ | coldtea 4 hours ago | parent [-] | | At least don't be rude. They refuted nothing of the sort. They just banged the same circular-logic drum. | |
| ▲ | anonymous908213 4 hours ago | parent [-] | | There is an element of rudeness to completely ignoring what I've already written and saying "you know [basic principle that was already covered at length], right?". If you want to talk about contributing to the discussion rather than being rude, you could start by offering a reply to the points already made, rather than making me repeat myself to address the level-0 take on the subject. | |
| ▲ | JoshTriplett 4 hours ago | parent [-] | | Repeating yourself doesn't make you right, just repetitive. Ignoring refutations you don't like doesn't make them wrong. Observing that something has already been refuted, in an effort to avoid further repetition, is not in itself inherently rude. Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology. For any given X, "AI can't do X yet" is a statement with an expiration date on it, and I wouldn't bet on that expiration date being too far in the future. This is a problem. It is, in particular, difficult at this point to construct a meaningful definition of intelligence that simultaneously includes all humans and excludes all AIs. Many motivated-reasoning / rationalization attempts to construct a definition that excludes the highest-end AIs often exclude some humans. (By "motivated-reasoning / rationalization", I mean that such attempts start by writing "and therefore AIs can't possibly be intelligent" at the bottom, and work backwards from there to faux-rationalize what they've already decided must be true.) | | |
| ▲ | anonymous908213 4 hours ago | parent [-] | | > Repeating yourself doesn't make you right, just repetitive. Good thing I didn't make that claim! > Ignoring refutations you don't like doesn't make them wrong. They didn't make a refutation of my points. They asserted a basic principle that I agreed with, but assumed that accepting the principle leads to their preferred conclusion. They made this assumption without providing any reasoning whatsoever for why the principle would lead to that conclusion, whereas I had already provided an entire paragraph of reasoning for why I believe it leads to a different conclusion. A refutation would have to start from there, refuting the points I actually made. Without that you cannot call it a refutation. It is just gainsaying. > Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology. And here we go AGAIN! I already agree with this point!!!!!!!!!!!!!!! Please, for the love of god, read the words I have written. I think machine intelligence is possible. We are in agreement. Being in agreement that machine intelligence is possible does not automatically lead to the conclusion that the programs that make up LLMs are machine intelligence, any more than a "Hello World" program is intelligence. This is, indeed, very repetitive. | |
| ▲ | JoshTriplett 4 hours ago | parent [-] | | You have given no argument for why an LLM cannot be intelligent. You're not even just claiming that current models are not intelligent; you seem to be claiming that they cannot be. If you are prepared to accept that intelligence doesn't require biology, then what definition do you want to use that simultaneously excludes all high-end AI and includes all humans? By way of example, Conway's Game of Life uses very simple rules and is Turing-complete. Thus, the Game of Life could run a (very slow) complete simulation of a brain. Similarly, so could the architecture of an LLM. There is no fundamental limitation there. | |
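(For reference, the "very simple rules" are: a live cell survives with two or three live neighbours, a dead cell becomes live with exactly three, and every other cell dies or stays dead. A minimal sketch of one generation, representing the board as a set of live-cell coordinates.)

    from collections import Counter

    def life_step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        # Count live neighbours for every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(life_step(glider))  # the glider's next generation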
| ▲ | anonymous908213 4 hours ago | parent [-] | | > You have given no argument for why an LLM cannot be intelligent. I literally did provide a definition and my argument for it already: https://news.ycombinator.com/item?id=47051523 If you want to argue with that definition of intelligence, or argue that LLMs do meet that definition of intelligence, by all means, go ahead[1]! I would have been interested to discuss that. Instead I have to repeat myself over and over restating points I already made because people aren't even reading them. > Not even that current models are not; you seem to be claiming that they cannot be. As I have now stated something like three or four times in this thread, my position is that machine intelligence is possible but that LLMs are not an example of it. Perhaps you would know what position you were arguing against if you had fully read my arguments before responding. [1] I won't be responding any further at this point, though, so you should probably not bother. My patience for people responding without reading has worn thin, and going so far as to assert I have not given an argument for the very first thing I made an argument for is quite enough for me to log off. | | |
| ▲ | JoshTriplett 3 hours ago | parent [-] | | > Probabilistic prediction is inherently incompatible with deterministic deduction. Human brains run on probabilistic processes. If you want to make a definition of intelligence that excludes humans, that's not going to be a very useful definition for the purposes of reasoning or discourse. > What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules. Have you tried this particular test, on any recent LLM? Because they have no problem handling that, and much more complex problems than that. You're going to need a more sophisticated test if you want to distinguish humans and current AI. I'm not suggesting that we have "solved" intelligence; I am suggesting that there is no inherent property of an LLM that makes them incapable of intelligence. |
|