| ▲ | dernett 18 hours ago |
| This is crazy. It's clear that these models don't have human intelligence, but it's undeniable at this point that they have _some_ form of intelligence. |
|
| ▲ | brendyn 17 hours ago | parent | next [-] |
If LLMs weren't created by us but were instead something discovered in another species' behaviour, it would 100% be labelled intelligence.
| |
| ▲ | te0006 10 hours ago | parent [-] | | Yes, and the same goes if the technology had been found embodied in machinery aboard a crashed UFO.
|
|
| ▲ | qudat 18 hours ago | parent | prev | next [-] |
My take is that a huge part of human intelligence is pattern matching. We just didn't understand how much multidimensional geometry influences our matching.
| |
| ▲ | keeda 17 hours ago | parent | next [-] | | Yes, it could be that intelligence is essentially a sophisticated form of recursive, brute-force pattern matching. I'm beginning to think the Bitter Lesson applies to organic intelligence as well: basic pattern matching can be implemented with very simple mathematical operations like multiply-and-accumulate, so it can scale through massive parallelization of relatively simple building blocks. | | |
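A minimal sketch of the multiply-and-accumulate idea, with made-up toy vectors; real systems use learned embeddings with thousands of dimensions:

    import numpy as np

    # A "pattern" and some candidates, as vectors in a shared space.
    # Toy numbers only, for illustration.
    pattern = np.array([0.9, 0.1, 0.4])
    candidates = np.array([
        [0.8, 0.2, 0.5],  # close to the pattern
        [0.1, 0.9, 0.0],  # far from the pattern
    ])

    # The core operation is multiply-then-accumulate (a dot product).
    # Stacked into a matrix multiply, it parallelizes across candidates.
    scores = candidates @ pattern
    print("best match:", int(np.argmax(scores)))  # -> 0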
| ▲ | bob1029 15 hours ago | parent [-] | | Intelligence is almost certainly a fundamentally recursive process. The ability to think about your own thinking over and over, as deeply as needed, is where all the magic happens. Counterfactual reasoning occurs every time you pop a mental stack frame. By augmenting our stack with external tools (paper, computers, etc.), we can extend this process as far as it needs to go. LLMs start to look a lot more capable when you put them into recursive loops with feedback from the environment. A trillion tokens' worth of "what if..." can be expended without touching a single token in the caller's context. This can happen at every level, as many times as needed, if we're using proper recursive machinery. The theoretical scaling around this is extremely favorable. | | |
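A hedged sketch of that recursive loop; llm here is a hypothetical stand-in for a model call, not any real API, and the prompts are invented:

    # Each sub-question runs in its own call-stack frame with a fresh
    # context; only its conclusion is returned to the caller.
    def llm(prompt: str) -> str:
        # Stand-in for a real model call; echoes so the sketch runs.
        return f"[model output for: {prompt[:40]}]"

    def solve(task: str, depth: int = 0, max_depth: int = 5) -> str:
        draft = llm(f"Attempt: {task}")
        if depth < max_depth and llm(f"Need a sub-question for: {draft}") == "yes":
            sub = llm(f"State the sub-question for: {draft}")
            # The recursive call's intermediate tokens never enter the
            # caller's context window; only the returned answer does.
            draft = llm(f"Revise {draft} given: {solve(sub, depth + 1)}")
        return draft

    print(solve("prove the lemma"))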
| |
| ▲ | sdwr 17 hours ago | parent | prev | next [-] | | I don't think it's accurate to describe LLMs as pattern matching. Prediction is the mechanism they use to ingest and output information, and they end up with a (relatively) deep model of the world under the hood. | | |
| ▲ | visarga 16 hours ago | parent | next [-] | | The "pattern matching" perspective is true if you zoom in close enough, just like "protein reactions in water" is true for brains. But if you zoom out, you see both humans and LLMs interact with external environments, which provide opportunity for novel exploration. The true source of originality is not inside but in the environment. Making it all about the model inside is a mistake; what matters more than the model is the data loop and the solution space being explored. | |
| ▲ | qudat 8 hours ago | parent | prev | next [-] | | > I don't think it's accurate to describe LLMs as pattern matching I'm talking about the inference step, which uses tensor arithmetic over embedding geometry to find patterns in text. We don't understand what those patterns are, but it's clear they're doing some heavy lifting, since LLM inference expresses logic and reasoning under the guise of our reductive "next token prediction". | |
| ▲ | D-Machine 17 hours ago | parent | prev | next [-] | | "Pattern matching" is not sufficiently specified here for us to say if LLMs do pattern matching or not. E.g. we can say that an LLM predicts the next token because that token (or rather, its embedding) is the best "match" to the previous tokens, which form a path ("pattern") in embedding space. In this sense LLMs are most definitely pattern matching. Under other formulations of the term, they may not be (e.g. when pattern matching refers to abstraction or abstracting to actual logical patterns, rather than strictly semantic patterns). | |
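A toy illustration of the "best match in embedding space" framing, with made-up two-dimensional vectors; a real model's context vector comes out of many transformer layers, not a lookup:

    import numpy as np

    vocab = ["cat", "sat", "mat"]
    embeddings = np.array([
        [0.9, 0.1],  # "cat"
        [0.2, 0.8],  # "sat"
        [0.7, 0.3],  # "mat"
    ])
    # Pretend summary of the token path so far (toy numbers).
    context_vector = np.array([0.8, 0.2])

    # Score each vocabulary embedding against the path; pick the best match.
    logits = embeddings @ context_vector
    print(vocab[int(np.argmax(logits))])  # -> "cat"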
| ▲ | keeda 17 hours ago | parent | prev | next [-] | | Yes, the world-model building is achieved via pattern matching and happens during ingestion and training, but that is also part of the intelligence. | |
| ▲ | DrewADesign 17 hours ago | parent | prev [-] | | Which is even more true for humans. |
| |
| ▲ | csomar 15 hours ago | parent | prev [-] | | Intelligence is hallucination that happens to produce useful results in the real world. |
|
|
| ▲ | threethirtytwo 17 hours ago | parent | prev | next [-] |
I don't think they will ever have human intelligence. It will always be an alien intelligence. But I think the trend line unmistakably points to a future where it can be MORE intelligent than a human in exactly the colloquial way we define "more intelligent". The fact that one of the greatest mathematicians alive has a page and is seriously benchmarking this shows how likely he believes this is to happen.
|
| ▲ | eru 15 hours ago | parent | prev | next [-] |
Well, AlphaGo and Stockfish can beat you at their games. Why shouldn't these models beat us at math proofs?
| |
| ▲ | _fizz_buzz_ 13 hours ago | parent | next [-] | | Chess and Go have very restrictive rules. It seems a lot more obvious to me why a computer can beat a human at them: computers have a huge advantage just by being able to calculate very deep lines in a very short time. I actually find it impressive how long humans were able to beat computers at Go. Math proofs seem a lot more open-ended to me. | |
| ▲ | thfuran 15 hours ago | parent | prev [-] | | AlphaGo and Stockfish were specifically designed and trained to win at those games. | | |
| ▲ | Davidzheng 14 hours ago | parent [-] | | And we can train models specifically at math proofs? I think the only difference is that math is bigger...
|
|
|
| ▲ | altmanaltman 17 hours ago | parent | prev | next [-] |
| Depends on what you mean by intelligence, human intelligence and human |
|
| ▲ | xyzsparetimexyz 10 hours ago | parent | prev | next [-] |
Yes, it is intelligent, but so what? It's not conscious, sentient, or sapient. It's a pattern-matching Chinese room.
|
| ▲ | ekianjo 17 hours ago | parent | prev | next [-] |
It's pattern matching, which is actually what we measure in IQ tests, just saying.
| |
| ▲ | jadenpeterson 17 hours ago | parent | next [-] | | There's some nuance. IQ tests measure pattern matching and, in an underlying way, other facets of intelligence - memory, for example. How well can an LLM 'remember' a thing? Sometimes Claude will perform compaction when its context window reaches 200k tokens, and afterwards it seems a little colder to me, but maybe that's just my imagination. I'm kind of a "power user". | |
| ▲ | rurban 17 hours ago | parent | prev [-] | | I call it matching. Pattern matching had a different meaning. | | |
| ▲ | ekianjo 16 hours ago | parent [-] | | What are you referring to? LLMs are neural networks at their core, and the simplest versions of neural networks are all about reproducing patterns observed during training. | | |
| ▲ | rurban 15 hours ago | parent [-] | | You need to understand the difference between general matching and pattern matching. Maybe you should have read more of the older AI books. An LLM is a general fuzzy matcher. A pattern matcher is an exact matcher that uses an abstract language, the "pattern". A general matcher uses a distance function instead; no pattern needed. Say you want to find a subimage in a big image, possibly rotated, scaled, tilted, distorted, with noise. You cannot do that with a pattern matcher, but you can with a general matcher, such as a fuzzy matcher or an LLM. Or you want to evaluate a Go position on a board. An LLM is perfect for that, because you don't need to come up with a special language to describe Go positions (older chess programs did exactly that); you just train the model on whether a position is good or bad, which can be fully automated from existing literature and later by self-play. You train the matcher not via patterns but via a function (win or lose).
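A small illustration of that distinction using Python's standard library: a regex stands in for the exact pattern matcher, and an edit-distance ratio for the general fuzzy matcher (toy strings only):

    import re
    import difflib

    text = "the quick brown fox"

    # Pattern matching: an exact match against an abstract pattern
    # language (here, a regex). It either matches or it doesn't.
    print(bool(re.search(r"qu\w+k", text)))  # True

    # General matching: no pattern language, just a similarity score
    # over raw inputs. "quik" matches "quick" approximately.
    ratio = difflib.SequenceMatcher(None, "quik", "quick").ratio()
    print(ratio > 0.8)  # True (ratio is about 0.89)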
|
|
|
|
| ▲ | TZubiri 16 hours ago | parent | prev [-] |
As someone who doesn't understand this shit, and given how it's always the experts fiddling with the LLMs to get good outputs, it feels natural to attribute the intelligence to the operator (or the training set) rather than to the LLM itself.