fmap 10 hours ago

> The idea is, if we don't think of anything more efficient, we'll at least be able to simulate a cat, and then an idiot, and then Einstein, and then something smarter. And since we almost certainly will think of something more efficient than "simulate a human brain", we should expect superintelligence to come much sooner.

The problem with this argument is that it assumes we're on a linear track to more and more intelligent machines. What we have with LLMs isn't that kind of general intelligence.

We have multi-paragraph autocomplete that's matching existing texts more and more closely. The resulting models are great priors for any kind of language processing, and they have simple reasoning capabilities insofar as those are present in the source texts. Using RLHF to make the resulting models useful for specific tasks is a real achievement, but it doesn't change how the training works or what the original training objective was.
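
To be concrete about what that original objective is: pretraining minimizes next-token cross-entropy over the corpus, nothing more. A minimal sketch, assuming a PyTorch-style model that emits per-position vocabulary logits:

    import torch.nn.functional as F

    def next_token_loss(logits, tokens):
        # logits: (batch, seq_len, vocab); tokens: (batch, seq_len) integer ids.
        # Each position's prediction is scored against the token that follows it;
        # RLHF is layered on afterwards and doesn't touch this base objective.
        preds = logits[:, :-1, :].reshape(-1, logits.size(-1))
        targets = tokens[:, 1:].reshape(-1)
        return F.cross_entropy(preds, targets)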

So let's say we continue along this trajectory, and we finally have a model that can faithfully reproduce and identify every word sequence in its training data, where that training data includes every word ever written up to that point. Where do we go from here?

Do you want to argue that it's possible there is a clever way to create AGI that has nothing to do with how current models work, and that we should be wary of this possibility? That's a much weaker argument than the one in the article. The article extrapolates from current capabilities while ignoring where those capabilities come from.

> And, even if you think A or B are unlikely, doesn't it make sense to just consider the possibility that they're true, and think about how we'd know and what we could do in response, to prevent C or D?

This is essentially Pascal's wager: https://plato.stanford.edu/entries/pascal-wager/

It might make sense to consider the possibility, but it doesn't make sense to invest non-trivial resources in it.

That part doesn't bother me at all, though. I know people who got grants from, e.g., MIRI to work on research in logic. If anything, it's a great way to fund academic research that otherwise gets little attention.

The real issue is that people are raising ridiculous amounts of money by claiming that the current advances in AI will lead to some science fiction future. When this future does not materialize, it will negatively affect funding for all work in the field.

And that's a problem, because there is great work going on right now, and not all of it is going to be immediately useful.

hannasanarion 7 hours ago

> So let's say we continue along this trajectory, and we finally have a model that can faithfully reproduce and identify every word sequence in its training data, where that training data includes every word ever written up to that point. Where do we go from here?

This is a fundamental misunderstanding of the entire point of predictive models (and also of how LLMs are trained and tested).

For one thing, the ability to faithfully reproduce texts is not the primary scoring metric used for the bulk of LLM training, and it hasn't been for years.

But more importantly, you don't build a weather model so that it can tell you last Tuesday's weather given information from last Monday; you use it to tell you tomorrow's weather given information from today. The totality of today's temperatures, winds, moisture levels, broader climatic patterns, particulates, albedos, and so on has never occurred before, and yet the model tells us something true about the never-before-seen consequences of these never-before-seen conditions, because it has learned to reason its way to new conclusions from new data.
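
The same point in miniature: fit on past observations, then predict for an input that never appeared in training. A toy sketch in Python (a least-squares line, purely illustrative; weather models are incomparably more complex):

    def fit_line(xs, ys):
        # Least-squares fit of y = a*x + b on historical (input, output) pairs.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])
    print(a * 5 + b)  # a sensible prediction for an input never seen in training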

Are today's "AI" models a glorified autocomplete? Yeah, but that's what all intelligence is. The next word I type is the result of an autoregressive process in my brain, producing the next choice from the totality of my previous choices and experiences. It's the same way the Q-learners that will kick your butt in StarCraft choose the best next click based on their history of previous clicks in the game combined with what they see on the screen, and they will have pretty good guesses about which clicks are best even if you're playing Zerg and they only ever trained against Terran.
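
For what it's worth, that "choose the best next click from history" loop is just the Q-learning update. A minimal tabular sketch in Python (real StarCraft agents use deep networks rather than a lookup table, so treat this as illustrative):

    import random
    from collections import defaultdict

    Q = defaultdict(float)                  # Q[(state, action)] -> estimated value
    alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    def choose_action(state, actions):
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, actions):
        # Nudge the estimate toward observed reward plus discounted best future value.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])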

A highly accurate autocomplete that can predict the behavior and words of a genius, when presented with never-before-seen evidence, will be able to reach novel conclusions in exactly the same way the human genius would when shown the same new data. Autocomplete IS intelligence.

New ideas don't happen because intelligences draw them out of the aether; they happen because intelligences produce new outputs in response to stimuli. And those stimuli can be self-inputs: that's what "thinking" is.
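
Mechanically, that self-input loop is what autoregressive generation already does. A minimal sketch, where model.sample_next stands in for any next-token sampler (a hypothetical interface, not a specific library's API):

    def think(model, prompt_tokens, n_steps):
        # Each sampled token is appended to the context, so the model's own
        # outputs become its next stimuli: generation as self-stimulation.
        tokens = list(prompt_tokens)
        for _ in range(n_steps):
            tokens.append(model.sample_next(tokens))  # hypothetical interface
        return tokens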

If you still think that all of today's AI hubbub is just vacuous hype around an overblown autocomplete, go to ChatGPT right now, click the "deep research" button, and ask it "what is the average height of the buildings in [your home neighborhood]?", or "how many calories are in [a recipe you just invented]?", or some other inane question that nobody would ever have bothered to write about before but that is hypothetically answerable from information on the internet, and see if what you get back is "just a reproduced word sequence from the training data".

gwd 3 hours ago

> We have multi-paragraph autocomplete that's matching existing texts more and more closely.

OK, I think I see where you're coming from. It sounds like what you're saying is:

E. LLMs only do multi-paragraph autocomplete; they are and always will be incapable of actual thinking.

F. Any approach capable of achieving AGI will be completely different in structure. Who knows if or when this alternate approach will even be developed; and if it is developed, we'll be starting from scratch, so we'll have plenty of time to worry about progress then.

With E, again, it may or may not be true. It's worth noting that it's a theoretical claim rather than an empirical one, but I think it's a reasonable assumption to start with.

However, there are actually theoretical reasons to think that E may be false. The best way to predict the weather is to have an internal model which approximates weather systems; the best way to predict the outcome of a physics problem is to have an internal model which approximates the physics of the thing you're trying to predict. And the best way to predict what a human would write next is to have a model of a human mind -- including a model of what the human mind has in its model (e.g., the state of the world).

There is some empirical data to support this argument, albeit in a very simplified setting: researchers trained a small transformer to predict valid moves in Othello, then probed it and found an internal representation of the Othello board inside the network:

https://thegradient.pub/othello/
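
The technique there, roughly: record the network's hidden activations after each move, then train a small probe to recover each board square's state from them. A hedged sketch of such a probe in PyTorch (names and dimensions are illustrative, not the post's actual code):

    import torch.nn as nn

    class BoardProbe(nn.Module):
        # If a simple classifier can decode the board from hidden activations,
        # the network plausibly maintains an internal model of the game state.
        def __init__(self, hidden_dim, n_squares=64, n_states=3):
            super().__init__()
            self.n_squares, self.n_states = n_squares, n_states
            self.linear = nn.Linear(hidden_dim, n_squares * n_states)

        def forward(self, acts):  # acts: (batch, hidden_dim)
            logits = self.linear(acts)
            # One empty/black/white prediction per board square.
            return logits.view(-1, self.n_squares, self.n_states)

The probe is trained with cross-entropy against ground-truth boards from a game engine; high probe accuracy is the evidence that the board is represented internally.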

And my own experience with LLMs matches the "LLMs have an internal model of the world" theory better than the "LLMs are simply spewing out statistical garbage" theory.

So, with regard to E: again, sure, LLMs may turn out to be a dead end. But I'd personally put the probability that LLMs are a complete dead end below 50%; and I don't think assigning that an overwhelmingly high probability (say, a one-in-a-million chance of being false) is reasonable, given the theoretical arguments and the empirical evidence against it.

With regard to F, again, I don't think this is true. We've learned so much about optimizing and distilling neural nets, optimizing training, and so on -- not to mention all the compute power we've built up. Even if LLMs are a dead end, whenever we do find an architecture capable of achieving AGI, I think a huge amount of the work we've put into optimizing LLMs will put us way ahead in optimizing that other system.

> ...that the current advances in AI will lead to some science fiction future.

I mean, if you'd told me 5 years ago that I'd be able to ask a computer, "Please use this Golang API framework package to implement CRUD operations for this particular resource my system has", and that the resulting code would 1) compile out of the box, 2) exhibit an understanding of that resource and how it relates to other resources in the system, based on having seen the code implementing those resources, and 3) make educated guesses (sometimes right, sometimes wrong, but always reasonable) about details I hadn't specified, I don't think I would have believed you.

Even if LLM progress is logarithmic, we're already living in a science fiction future.

EDIT: The scenario actually has very good technical "asides"; if you want to see their view of how a (potentially dangerous) personality emerges from "multi-paragraph autocomplete", look at the drop-down labelled "Alignment over time", specifically what follows "Here’s a detailed description of how alignment progresses over time in our scenario:".

https://ai-2027.com/#alignment-over-time