Lerc 3 hours ago

I don't really understand the argument that AGI cannot be achieved just by scaling current methods. I too believe that (for any sane level of scaling, anyway), but this year's LLMs are not using entirely last year's methods. And those, in turn, used methods that weren't in use the year before.

It seems like a prediction along the lines of "Bob won't become a Formula One driver in a minivan". It's true, but not very interesting.

If Bob turned up a couple of years later in Formula One, you'd probably be right in saying that what he's driving is not a minivan. The same is true for AGI: anyone who says it can't be done with current methods can point to any advancement along the way and say that's the difference.

A better way to frame it would be: is there any fundamental, quantifiable ability that is blocking AGI? I would not be surprised if the breakthrough technique has already been created, but the research has not yet described the problem it solves well enough for us to know that it is the breakthrough.

I realise that, for some, the notion of AGI is relatively new, but some of us have been considering the matter for some time. I suspect my first essay on the topic was around 1993. It's been quite weird watching people fall into all of the same philosophical potholes that were pointed out to us at university.

hunterpayne 2 hours ago | parent | next [-]

Then you don't understand Machine Learning in any real way. Literally the 3rd or 4th thing you learn about ML is that for any given problem, there is an ideal model size. Just making the model bigger doesn't work, because of something called the curse of dimensionality. This is something we have discovered about every single problem and every type of learning algorithm used in ML. For LLMs, we probably moved past the ideal model size about 18 months ago. From the POV of someone who actually learned ML in school (from the person who coined the term), I see no real reason to think that AGI will happen based on the current techniques. Maybe someday. Probably not anytime soon.
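
To make the ideal-model-size point concrete, here's a toy sketch (polynomial degree standing in for capacity, scikit-learn assumed; obviously nothing LLM-scale) of validation error bottoming out at a finite model size and then getting worse on fixed data:

    # Toy illustration only: sweep model capacity on a fixed dataset and
    # watch validation error fall, bottom out, then rise again.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, size=200)  # noisy target
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

    for degree in (1, 3, 5, 10, 20):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_tr, y_tr)
        err = mean_squared_error(y_va, model.predict(X_va))
        print(f"degree={degree:2d}  validation MSE={err:.3f}")
    # Past the sweet spot for this data, extra capacity only overfits.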

PS The first thing you learn about ML is to compare your models against a random baseline to make sure the model didn't degenerate during training.
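
A minimal sketch of that sanity check, assuming a scikit-learn-style classifier:

    # Toy example: a trained model should clearly beat a random baseline.
    from sklearn.datasets import load_digits
    from sklearn.dummy import DummyClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    baseline = DummyClassifier(strategy="uniform", random_state=0).fit(X_tr, y_tr)
    model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

    print("random baseline:", baseline.score(X_te, y_te))  # ~0.10 for 10 classes
    print("trained model: ", model.score(X_te, y_te))
    # If the model's score is near the baseline's, training degenerated.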

Lerc an hour ago | parent [-]

Um, what? Are you interpreting scaling to mean adding parameters and nothing else?

I'm not entirely sure where you get your confidence that we've passed the ideal model size, but at least that's a clear prediction, so you should be able to tell if and when you are proven wrong.

Just for the record, do you care to put an actual number on something we won't go past?

[edit] Vibe check on user comes out as

    Contrarian  45%
    Pedantic    35%
    Skeptical   15%
    Direct       5%

That's got to be some sort of record.

trial3 3 hours ago | parent | prev [-]

I think the minivan analogy is flawed, and that AGI is moving from "Bob driving a minivan" to "Bob literally becoming the thing that is Formula One".

Lerc 3 hours ago | parent [-]

What would that even mean, though? Who is making claims of that sort?

I feel like it's such a bending of the idea that it's not really making a prediction of anything at all.