LamaOfRuin, 3 days ago:

Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to that change. It's true that something interesting is happening. GP did not dispute that. That doesn't make it reasoning, and many people still believe that words should have meaning in order to discuss things intelligently.

Language is ultimately a living thing and will inevitably change. This usually involves people fighting the change, and no one knows ahead of time which side will win.
peterlk, 3 days ago:

I don't think we need to redefine reasoning. Here's the definition of "reason" (the verb): "think, understand, and form judgments by a process of logic".

If Claude 4 provides a detailed, logical breakdown in its "reasoning" (yeah, that usage is overloaded), then we could say that there was logical inference involved.

"But wait!", I already hear someone saying, "That token output is just the result of yet another stochastic process, and isn't directing the AI in a deterministic, logical way, and thus it is not actually using logic; it's just making something that looks convincingly like logic, but is actually a hallucination of some stochastic process." And I think this is a good point, but I find it difficult to convince myself that what humans are doing is so different that we cannot use the word "reasoning".

As a sidenote, I am _very_ tired of the semantic quagmire that is the current AI industry, and I would really appreciate a rigorous guide to all these definitions.
zahlman, 3 days ago:

> Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to that change.

I agree. However, they can clearly do a reasonable facsimile of many things that we previously believed required reasoning to do acceptably.
quesera, 3 days ago:

Right -- we know that LLMs cannot think, feel, or understand. Therefore, whenever they produce output that looks like the result of those things, we must either be deceived by a reasonable facsimile, or we simply misapprehended their necessity in the first place.

But do we understand the human brain as well as we understand LLMs? Obviously there's something different, but is it just a matter of degree?

LLMs have greater memory than humans, and lesser ability to correlate it. Correlation is powerful magic. That's pattern matching though, and I don't see a fundamental reason why LLMs won't get better at it. Maybe never as good as (smart) humans are, but with their superior memory, maybe that will often be adequate.
CyberDildonics, 2 days ago:

> they produce output that looks like the result of those things

Is a cardboard cutout human to some degree? Is a recording a voice? What about a voice recording in a phone menu?

> LLMs have greater memory than humans

So does a bank of hard drives by that metric.
quesera, 2 days ago:

(Memory Access + Correlation Skills) is a decent proxy for several of the many kinds of human intelligence. HDDs don't have correlation skills, but LLMs do. They're just not smart-human-level "good", yet.

I am not sure whether I believe AGI will happen. To be meaningful, it would have to be above the level of a smart human. Building an army of disincorporated average-human-intelligence actors would be economically "productive" though. This is the future I see us trending toward today. Most humans are not special.

This is dystopian, of course. Not in the "machines raise humans for energy" sort of way, but probably no less socially destructive.
CyberDildonics, 2 days ago:

> HDDs don't have correlation skills, but LLMs do

So which is it, the memory or the correlation? I'll give you a hint, this is a trick question.
quesera, 2 days ago:

I never suggested that it was one or the other. I think it's very obviously both.

(and these two qualities are likely necessary, but not sufficient)
CyberDildonics, 2 days ago:

So according to you, there is a threshold where someone who can't remember enough or correlate things stops being human?
quesera, 2 days ago:

Stops exhibiting human intelligence, on at least some of the many axes thereof, yes definitely.

I feel like you're trying to gotcha me into some corner, but I'm not sure you're reading my comments fully. Or perhaps I'm being less clear than I think.

I don't mean to be ungracious, but am I missing something here?
CyberDildonics, 2 days ago:

It's not a gotcha. I just don't think you're thinking through the implications of what you're saying when you think only in terms of being able to fake thought with statistics.
quesera, 2 days ago:

I'm saying that recall + correlation is sometimes enough to emulate some level of some forms of human intelligence. How frequently? How high? And which forms? These metrics are in flux. Today is very different from a few months ago.

Enough to perform at the level of an ordinary retail service employee? I think this is probably within reach, soon.

Do you think that's naive?
ToValueFunfetti, 3 days ago:

It would be useful to supply a definition if your point is that others' definition is wrong. Are you saying they don't deduct inferences from premises? Is it "deduct" that you take issue with?
zahlman, 3 days ago:

They do not perform voluntary exploration of the consequences of applying logical rules for deduction; at best they pattern-match. Their model of conceptual meaning (which last I checked still struggles with negation, meta-reference and even simply identifying irrelevant noise) is not grounded in actual observational experience, but only in correlations between text tokens.

I think it should be abundantly clear that what ChatGPT does when you ask it to play chess is fundamentally different from what Stockfish does. It isn't just weak and doesn't just make embarrassing errors in generating legal moves (like a blindfolded human might); it doesn't actually "read" and it generates post-hoc rationalization for its moves (which may not be at all logically sound) rather than choosing them with purpose.

There are "reasoning models" that improve on this somewhat, but cf. https://news.ycombinator.com/item?id=44455124 from a few weeks ago, and my commentary there: https://news.ycombinator.com/item?id=44473615
ToValueFunfetti, 3 days ago:

Okay, sure. My intuition is that LLMs reason at about a three-year-old level, which appears more impressive because of their massive memories. By your definition and criticism, I take it that you wouldn't describe a three-year-old as capable of reasoning, so we're probably on the same page.
exasperaited, 3 days ago:

> A dismissive “no” is not a helpful addition to this discussion.

Yes, your "no" must be more upbeat! Even if it's correct. You must be willing to temper the truth of it with something that doesn't hurt the feelings of the massive.

> Does it mean that we’ve reached AGI? No. Does it mean the process reflects exactly what humans do? No.

But here it's fine to use a "No." because these are your straw men, right? Is it just wrong to use a "No." when it's not in safety padding for the overinvested?
GoblinSlayer, 3 days ago:

I have a hunch it can reflect what humans do: "A junior intern who has memorized the Internet and talks without thinking on permanent autopilot". We're just surprised how much humans can do without thinking.
southernplaces7, 3 days ago:

> A dismissive “no” is not a helpful addition to this discussion.

Neither are wide-eyed claims stemming from drinking too much LLM company koolaid. Blatantly mistaken claims don't need more than a curt answer. Why don't I go ahead and claim ChatGPT has a soul, and then get angry when my claim is dismissed?
southernplaces7, 3 days ago:

> Do you think humans are logical machines? No, we are not. Therefore, do we not reason?

No, but we are conscious, and we know we are conscious, which doesn't require being a logical being too. LLMs, on the other hand, aren't conscious, and there's zero evidence that they are. Thus, they don't reason, since this, unlike logic, does require consciousness.

Why not avoid redefining things into a salad mix of poor logic until you can pretend that something with no evidence in its favor is real?
sothatsit, 2 days ago:

The idea that reasoning requires consciousness is very silly. That's not to mention that consciousness is such a poorly defined term in the first place.