| ▲ | SweetSoftPillow 4 days ago |
| What is "actual intelligence" and how are you different from a Markov chain? |
|
| ▲ | sixo 4 days ago | parent | next [-] |
Roughly, actual intelligence needs to maintain a world model in its internal representation, not merely an embedding of language; a world model is a very different data structure and will probably be learned in a very different way. This includes things like:
- a map of the world, or concept space, or a codebase, etc.
- causality
- "factoring", which breaks down systems or interactions into predictable parts

Language alone is too blurry to do any of these precisely.
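A minimal sketch, in Python, of the distinction this comment is drawing, using invented toy data: a "world model" here is an explicit structure of entities, spatial relations, and causal edges that can be queried, versus a language embedding, which is an opaque vector. Every name and number below (the river, the rain, the vector values) is hypothetical and purely illustrative; it is not a claim about how such a model would actually be learned.

```python
# A toy "world model" as an explicit data structure: named entities, spatial
# relations, and causal edges you can query -- as opposed to a language
# embedding, which is just a vector of floats summarizing text.
world = {
    "entities": {"river", "bridge", "village"},
    "map": {("village", "bridge"): "north_of", ("bridge", "river"): "spans"},
    "causes": {"heavy_rain": ["river_floods"], "river_floods": ["bridge_closed"]},
}

# By contrast, an embedding of a sentence about the same scene is opaque:
language_embedding = [0.12, -0.73, 0.05, 0.98]  # made-up numbers, nothing queryable

def downstream_effects(event, causes):
    """Follow causal edges to predict the consequences of an event."""
    effects, frontier = set(), [event]
    while frontier:
        for nxt in causes.get(frontier.pop(), []):
            if nxt not in effects:
                effects.add(nxt)
                frontier.append(nxt)
    return effects

print(downstream_effects("heavy_rain", world["causes"]))
# {'river_floods', 'bridge_closed'}
```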
| |
▲ | coldtea 4 days ago | parent | next [-] | | >Roughly, actual intelligence needs to maintain a world model in its internal representation
And how's that not like stored information (memories) and weighted links between each of them and/or between groups of them? | | |
| ▲ | sixo 4 days ago | parent [-] | | It probably is a lot like that! I imagine it's a matter of specializing the networks and learning algorithms to converge to world-model-like-structures rather than language-like-ones. All these models do is approximate the underlying manifold structure, just, the manifold structure of a causal world is different from that of language. |
| |
▲ | astrange 4 days ago | parent | prev | next [-] | | > Roughly, actual intelligence needs to maintain a world model in its internal representation
This is GOFAI metaphor-based development, which never once produced anything useful. They just sat around saying things like "people have world models" and then decided that if they programmed something and called it a "world model" they'd get intelligence. It didn't work out, but they still went around claiming people have "world models" as if they hadn't just made that up.
An alternative thesis, "people do things that worked the last time they did them", explains both language and action planning better; e.g. you don't form a model of the contents of your garbage in order to take it to the dumpster.
https://www.cambridge.org/core/books/abs/computation-and-hum... | | |
▲ | sixo 4 days ago | parent [-] | | I see no reason to believe an effective LLM-scale "world-modeling" model would look anything like the kinds of things previous generations of AI researchers were doing. It will probably look a lot more like a transformer architecture--big and compute-intensive and with a fairly simple structure--but with a learning process which is different in some key way that makes different manifold structures fall out. |
| |
▲ | SweetSoftPillow 4 days ago | parent | prev [-] | | Please check example #2 here: https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...
It is not "language alone" anymore. LLMs are multimodal nowadays, and it's still just the beginning. And keep in mind that these results are produced by a cheap, small, and fast model. | | |
▲ | mdaniel 4 days ago | parent | next [-] | | I thought you were making an entirely different point with your link, since the lag caused the page to show just the upskirt render until the rest of the images loaded in and it could scroll to the reference of your actual link.
Anyway, I don't think that's the flex you think it is, since the topographic map clearly shows the beginning of the arrow sitting in the river, and the rendered image decided to hallucinate a winding brook, as well as its little tributary to the west, in view of the arrow. I am not able to decipher the legend [that ranges from 100m to 500m and back to 100m, so maybe the input was hallucinated, too, for all I know], but I don't obviously see 3 distinct peaks nor a basin between the snow-cap and the smaller mound.
I'm willing to be more liberal for the other two images, since "instructions unclear" about where the camera was positioned, but for the topographic one, it had a circle.
I know I'm talking to myself, though, given the tone of every one of these threads | |
| ▲ | devnullbrain 4 days ago | parent | prev [-] | | Every one of those is the wrong angle |
|
|
|
| ▲ | ornornor 4 days ago | parent | prev | next [-] |
| What I mean is that the current generation of LLMs don’t understand how concepts relate to one another. Which is why they’re so bad at maths for instance. Markov chains can’t deduce anything logically. I can. |
| |
▲ | astrange 4 days ago | parent | next [-] | | > What I mean is that the current generation of LLMs don’t understand how concepts relate to one another.
They must be able to do this implicitly; otherwise, why are their answers related to the questions you ask them instead of being completely off-topic? https://phillipi.github.io/prh/
A consequence of this is that you can steal a black-box model by sampling enough answers from its API, because you can reconstruct the original model distribution. | |
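A minimal sketch of that sampling claim, under an assumed toy setup: the "black box" is reduced to a hidden answer distribution that can only be sampled one response per call. With enough samples, the empirical frequencies approach the hidden distribution, which is the sense in which enough API answers let you reconstruct (and then distill) the model. The distribution and function names below are invented for illustration and are not anyone's real API.

```python
import random
from collections import Counter

# Hypothetical hidden distribution -- the "attacker" cannot read this directly,
# only sample from it via query_black_box().
hidden_distribution = {"yes": 0.6, "no": 0.3, "maybe": 0.1}

def query_black_box():
    """Stand-in for one API call: returns a single sampled answer."""
    tokens = list(hidden_distribution)
    weights = list(hidden_distribution.values())
    return random.choices(tokens, weights=weights)[0]

# Sample many answers and reconstruct the distribution from frequencies.
counts = Counter(query_black_box() for _ in range(100_000))
total = sum(counts.values())
reconstructed = {tok: n / total for tok, n in counts.items()}

print(reconstructed)  # close to {'yes': 0.6, 'no': 0.3, 'maybe': 0.1}
```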
| ▲ | oasisaimlessly 4 days ago | parent | prev | next [-] | | The definition of 'Markov chain' is very wide. If you adhere to a materialist worldview, you are a Markov chain. [Or maybe the universe viewed as a whole is a Markov chain.] | |
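For reference, a minimal Python sketch of a Markov chain in the narrow, textbook sense the thread began with: the next state is drawn from a distribution that depends only on the current state. The toy corpus and function names are invented purely for illustration.

```python
import random
from collections import defaultdict

def train(tokens):
    """Count word -> next-word transitions (a first-order Markov chain)."""
    table = defaultdict(list)
    for cur, nxt in zip(tokens, tokens[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, length=10):
    """Walk the chain: each step conditions only on the current word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept".split()
print(generate(train(corpus), "the"))
```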
▲ | anticrymactic 4 days ago | parent | prev | next [-] | | > Which is why they’re so bad at maths for instance.
I don't think LLMs are currently intelligent. But please show a GPT-5 chat where it gets any math problem wrong that most "intelligent" people would get right. | |
▲ | sindercal 4 days ago | parent | prev [-] | | You and Chomsky are probably the last two people on earth to believe that. | | |
▲ | coldtea 4 days ago | parent | next [-] | | It wouldn't matter if they are both right. Social truth is not reality, and scientific consensus is not reality either (just a good proxy for "is this true", but it's been shown to be wrong many times - at least based on a later consensus, if not objective experiments). | |
| ▲ | red75prime 4 days ago | parent | prev [-] | | Nah. There are whole communities that maintain a baseless, but utterly confident dismissive stance. Look in /r/programming, for example. |
|
|
|
| ▲ | ForHackernews 4 days ago | parent | prev [-] |
For one thing, I have internal state that continues to exist when I'm not responding to text input; I have some (limited) access to my own internal state and can reason about it (metacognition). So far, LLMs do not, and even when they claim to, they are hallucinating: https://transformer-circuits.pub/2025/attribution-graphs/bio...
| |
| ▲ | bhhaskin 4 days ago | parent | next [-] | | I completely agree. LLMs only do call and response. Without the call there is no response. | | |
| ▲ | recursive 4 days ago | parent [-] | | Would a human born into a sensory deprivation chamber ever make a call? | | |
| |
▲ | coldtea 4 days ago | parent | prev [-] | | >For one thing, I have internal state that continues to exist when I'm not responding to text input
Do you? Or do you just have memory, and are run on a short loop? | |
| ▲ | shakna 4 days ago | parent [-] | | Whilst all the choices you make tend to be in the grey matter, the rest of you does have internal state - mostly in your white matter. https://scisimple.com/en/articles/2025-03-22-white-matter-a-... | | |
▲ | coldtea 4 days ago | parent [-] | | >Whilst all the choices you make tend to be in the grey matter, the rest of you does have internal state - mostly in your white matter.
Yeah, but so? Does the substrate of the memory ...matter? (pun intended) When I wrote "memory" above, it could refer to all the state we keep, regardless of whether it's gray matter, white matter, the gut "second brain", etc. | |
| ▲ | ForHackernews 4 days ago | parent | next [-] | | Human brains are not computers. There is no "memory" separate from the "processor". Your hippocampus is not the tape for a Turing machine. Everything about biology is complex, messy and analogue. The complexity is fractal: every neuron in your brain is different from every other one, there's further variation within individual neurons, and likely differential expression at the protein level. https://pmc.ncbi.nlm.nih.gov/articles/PMC11711151/ | |
▲ | shakna 4 days ago | parent | prev [-] | | As the article above attempts to show, there's no loop. Memory and state aren't static. You are always processing, evolving. That's part of why organizational complexity is one of the underpinnings of consciousness: who you are is a constant evolution. |
|
|
|
|