| ▲ | ozgung 5 days ago |
| No, it's not "just a very good autocomplete". I don't know why people keep repeating this claim (it's wrong), but I find it an extremely counterproductive position. Some people just love to dismiss the capabilities of AI with a very shallow understanding of how it works. Why? It generates words one by one, like we all do. That doesn't mean it does just that and nothing else. Token-by-token generation is the mechanics of how these models are trained, how they do inference, and, most importantly, how they communicate with us. It doesn't define what they are or what their limits are. This is reductionism: it ignores the mathematical complexity of a giant neural network. |
|
| ▲ | Bjartr 5 days ago | parent | next [-] |
| > like we all do Do we though? Sure, we communicate sequentially, but that doesn't mean our internal effort is piecewise and linear. A modern transformer LLM, however, is: each token is sampled from a distribution that depends exclusively on the tokens that came before it. Mechanistically speaking, it works similarly to autocomplete, just at a very different scale. How much of an unavoidable handicap this incurs, if any, is absolutely up for debate. But yes, taking this mechanistic truth and considering it only in a shallow manner underestimates the capability of LLMs by a large degree. |
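A minimal sketch of that mechanism in Python. The toy vocabulary and scoring function are invented stand-ins for a real transformer forward pass; the point is only the shape of the loop: each token is drawn from a distribution computed from the tokens generated so far, and nothing else.

    import random

    def sample_next(context):
        # Hypothetical stand-in for an LLM forward pass: map the full
        # context so far to a probability distribution over a toy vocabulary.
        vocab = ["the", "cat", "sat", "on", "mat", "."]
        scores = [1.0 + context.count(w) for w in vocab]  # dummy scoring
        total = sum(scores)
        return vocab, [s / total for s in scores]

    context = ["the"]
    for _ in range(6):
        vocab, probs = sample_next(context)   # depends only on prior tokens
        context.append(random.choices(vocab, weights=probs)[0])
    print(" ".join(context))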
| |
| ▲ | kenjackson 5 days ago | parent [-] | | Our thinking is also based only on events that occurred previously in time. We don’t use events in the future. | | |
| ▲ | ElevenLathe 5 days ago | parent | next [-] | | Is this a certainty? I thought it was an open question whether quantum effects are at play in the brain, and those have a counterintuitive relationship with time (to vastly dumb things down in a way my grug mind can comprehend). | | |
| ▲ | kenjackson 5 days ago | parent | next [-] | | Well, there's no evidence of this that I've seen. If quantum effects are at play, then maybe that's the blocker for AGI. | | |
| ▲ | ElevenLathe 5 days ago | parent [-] | | I think it's more that there isn't yet evidence against it. In other words, we're not sure whether or not the brain has some kind of special sauce that doesn't just reduce to linear algebra. | | |
| ▲ | kenjackson 5 days ago | parent | next [-] | | "I think it's more that there isn't yet evidence against it." There isn't? AFAIK we have no proof of anyone being able to see into the future. Now maybe there are other manifestations of this, but I know of no test today that even hints at it. | |
| ▲ | wasabi991011 5 days ago | parent | prev [-] | | Quantum effects definitely reduce to linear algebra, however. |
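That reduction can be made concrete: a quantum state is a complex vector, and evolution is multiplication by a unitary matrix. A minimal single-qubit sketch with numpy (the gate choice is arbitrary, just for illustration):

    import numpy as np

    state = np.array([1.0, 0.0], dtype=complex)           # the |0> state
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a unitary gate

    state = hadamard @ state      # time evolution = matrix times vector
    probs = np.abs(state) ** 2    # Born rule: probability = |amplitude|^2
    print(probs)                  # [0.5 0.5]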
|
| |
| ▲ | wasabi991011 5 days ago | parent | prev [-] | | I'm aware of a counterintuitive relationship with space, but what's the one with time? |
| |
| ▲ | freejazz 5 days ago | parent | prev [-] | | This is unhelpfully obtuse | | |
| ▲ | kenjackson 5 days ago | parent [-] | | What's obtuse about it? It's honestly a very straightforward statement. Everything we think or say is a function of past events. We don't incorporate future events into what we think or say. Even speculation or imagination about future events occurred in the past (that is, the act of imagining them occurred in the past). It's really a super simple concept -- maybe it's so simple that it seems obtuse. | | |
| ▲ | freejazz 5 days ago | parent [-] | | Because the other poster's point wasn't that it was a 'past event.' The point was that it's just predicting based upon the previous token. It's disingenuous to mix the two concepts up. | | |
| ▲ | kenjackson 5 days ago | parent [-] | | > The point was that it's just predicting based upon the previous token. Well, that's just wrong. None of the LLMs of interest predict based on the previous token alone; they condition on the entire context window. |
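The distinction can be made concrete with two toy predictors; the counting "models" below are invented purely for illustration. A bigram model really does predict from the previous token alone, whereas an LLM-style predictor conditions on the whole preceding context (crudely approximated here by a two-token window):

    from collections import Counter

    corpus = "the cat sat on the mat the cat ran".split()

    def bigram_next(tokens):
        # Sees only the single previous token.
        prev = tokens[-1]
        follows = [corpus[i + 1] for i in range(len(corpus) - 1)
                   if corpus[i] == prev]
        return Counter(follows).most_common(1)[0][0]

    def context_next(tokens):
        # Sees a longer window of the context (here, two tokens).
        pair = tokens[-2:]
        follows = [corpus[i + 2] for i in range(len(corpus) - 2)
                   if corpus[i:i + 2] == pair]
        return Counter(follows).most_common(1)[0][0] if follows else bigram_next(tokens)

    print(bigram_next(["the", "cat"]))   # conditioned on "cat" alone
    print(context_next(["the", "cat"]))  # conditioned on "the cat"

A real transformer takes this to the limit: attention lets every token in the context window influence the next-token distribution.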
|
|
|
|
|
|
| ▲ | CamperBob2 5 days ago | parent | prev | next [-] |
| > I don't know why people repeat this thing (it's wrong) Because they simply don't care if they're wrong. At this point, given what we've seen, that seems like the only explanation left. You don't need to be a fanatical AGI evangelist, but when an "autocomplete" starts winning international math competitions, you need to start calling it something else. |
|
| ▲ | karmakaze 5 days ago | parent | prev [-] |
| I can't say for certain that our wetware isn't "just a very good autocomplete". |
| |
| ▲ | esafak 5 days ago | parent [-] | | A very good autocomplete is realized by developing an understanding. |
|