stuaxo 6 days ago

It's insane really; anyone who has worked with LLMs for a bit and has an idea of how they work shouldn't think it's going to lead to "AGI".

Hopefully some big players, like FB, bankrupt themselves.

IanCal 6 days ago | parent | next [-]

Tbh I find this view odd, and I wonder what people view as AGI now. It used to be that we had extremely narrow pieces of AI, and I remember being on a research project about architectures where just a very basic “what’s going on?” was advanced. Understanding that someone asked a question, that it would be solved by getting a book, and being able to then go and navigate to the place the book was likely to be, was fancy. Most systems could solve literally one type of problem. They weren’t just bad at other things; they were fundamentally incapable of anything but an extremely narrow use case.

I can throw wide ranging problems at things like gpt5 and get what seem like dramatically better answers than if I asked a random person. The amount of common sense is so far beyond what we had it’s hard to express. It used to be always pointed out that the things we had were below basic insect level. Now I have something that can research a charity, find grants and make coherent arguments for them, read matrix specs and debug error messages, and understand sarcasm.

To me, it’s clear that AGI is here. But then what I always pictured from it may be very different from what you picture. What’s your image of it?

whizzter 6 days ago | parent | next [-]

It's more that "random" people are dumb as bricks (though in the name of equality and historic measurement errors we've decided to overlook that). Add to that the fact that AIs have a phenomenal (internet-sized) memory, and they end up far more capable than many people.

However, even "dumb" people can often structure judgements in a way that AIs cannot; it's just that many have such a poor knowledge base that they cannot build those structures coherently, whereas AIs succeed thanks to their knowledge.

I wouldn't be surprised if the top AI firms today spend an inordinate amount of time building "manual" appendages onto their LLM systems to cater to tasks such as debugging, to uphold the facade that the system is really smart. In reality it's mostly papering over a leaky model to avoid losing the enormous investments they need to stay alive, in the hope that someone on their staff comes up with a real solution to self-learning.

https://magazine.sebastianraschka.com/p/understanding-reason...

adwn 6 days ago | parent | prev | next [-]

I think the discrepancy between different views on the matter mainly stems from the fact that state-of-the-art LLMs are better (sometimes extremely better) at some tasks, and worse (sometimes extremely worse) at other tasks, compared to average humans. For example, they're better at retrieving information from huge amounts of unstructured data. But they're also terrible at learning: any "experience" which falls out of the context window is lost forever, and the model can't learn from its mistakes. To actually make it learn something requires very many examples and a lot of compute, whereas a human can permanently learn from a single example.

andsoitis 6 days ago | parent [-]

> human can permanently learn from a single example

This, to me at least, seems like an important ingredient to satisfying a practical definition / implementation of AGI.

Another might be curiosity, and I think perhaps also agency.

Yoric 6 days ago | parent | prev | next [-]

I think it's clear that nobody agrees what AGI is. OpenAI describes it in terms of revenue. Other people/orgs in terms of, essentially, magic.

If I had to pick a name, I'd probably describe ChatGPT & co as advanced proof of concepts for general purpose agents, rather than AGI.

delecti 6 days ago | parent [-]

> I think it's clear that nobody agrees what AGI is

People selling AI products are incentivized to push misleading definitions of AGI.

boppo1 6 days ago | parent | prev | next [-]

Human-level intelligence. Being able to know what it doesn't know. Having a practical grasp on the idea of truth. Doing math correctly, every time.

For example: I give it a high-res photo of a kitchen and ask it to calculate the volume of a pot in the image.

tomaskafka 6 days ago | parent | next [-]

You discover truth by doing stuff in the real world and observing the results. Current LLMs have enough intelligence, but all the inputs they have are the “he said, she said” of us monkeys, including all omissions and biases.

snapcaster 6 days ago | parent | prev | next [-]

But many humans can't do a lot of those things and we still consider them "generally intelligent"

293984j29384 6 days ago | parent | prev [-]

None of what you describe would I label within the realm of 'average'

swiftcoder 6 days ago | parent [-]

It's not about what the average human can do - it's about what humans as a category are capable of. There will always be outliers (in both directions), but you can, in general, teach a human a variety of tasks, such as performing arithmetic deterministically, that you cannot teach to, for example, a parrot.

audunw 5 days ago | parent | prev | next [-]

I don’t have a very high expectation of AGI at all. Just an algorithm or system you can put onto a robot dog and get dog-level general intelligence. You should be able to live with that robot dog for 10 years, and it should be just as capable as a dog throughout that timespan.

Hell, I’d even say we have AGI if you could emulate something like a hamster.

LLMs are way more impressive in certain ways than such a hypothetical AGI. But that has been true of computers for a long time. Computers have been much better at Chess than humans for decades. Dogs can’t do that. But that doesn’t mean that a chess engine is an AGI.

I would also say we have a special form of AGI if the AI can pass an extended Turing test. We’ve had chat bots that can fool a human for a minute for a long time. That doesn’t mean we had AGI. So time and knowledge were always factors in a realistic Turing test. If an AGI can fool someone who knows how to properly probe an LLM, for a month or so, while solving a bunch of different real world tasks that require stable long term memory and planning, then I’d say we’re in AGI territory for language specifically. I think we have to distinguish between language AGI and multi-modal AGI. So this test wouldn’t prove what we could call “full” AGI.

These are some of the missing components for full AGI:

- Being able to act as a stable agent with a stable personality over long timespans
- Capable of dealing with uncertainties; having an understanding of what it doesn’t know
- One-shot learning, with long term retention, for a large number of things
- Fully integrated multi-modality across sound, vision, and other inputs/outputs we may throw at it

The last one is where we may be able to get at the root of the algorithm we’re missing. A blind person can learn to “see” by making clicks and using their ears. Animals can do similar “tricks”. I think this is where we truly see the full extent of the generality and adaptability of the biological brain. Imagine trying to make a robot that can exhibit this kind of adaptability. It doesn’t fit into the model we have for AI right now.

homarp 6 days ago | parent | prev | next [-]

My picture of AGI is 1) autonomous improvement, and 2) the ability to say 'I don't know / it can't be done'.

dmboyd 6 days ago | parent [-]

I wonder if 2) is a result of publication bias towards positive results in the training set. An “I don’t know” response is probably ranked unsatisfactory by human feedback, and most published scientific literature is biased towards positive results and factual explanations.

InitialLastName 6 days ago | parent [-]

In my experience, the willingness to say "I don't know" instead of confabulating is also down-rated as a human attribute, so it's not surprising that even an AGI trained on the "best" of humanity would avoid it.

AlienRobot 6 days ago | parent | prev [-]

Nobody is saying that LLMs don't work like magic. I know how neural networks work and they still feel like voodoo to me.

What we are saying is that LLMs can't become AGI. I don't know what AGI will look like, but it won't look like an LLM.

There is a difference between being able to melt iron and being able to melt tungsten.

thaawyy33432434 6 days ago | parent | prev | next [-]

Recently I realized that the US is very close to a centrally planned economy. Meta wasted 50B on the metaverse, which is about as much as Texas spends on healthcare. Now the "AI" investments seem dubious.

You could fund 1000+ projects with this kind of money. This is not effective capital allocation.

amelius 6 days ago | parent | prev | next [-]

The only way we'll have AGI is if people get dumber. Using modern tech like LLMs makes people dumber. Ergo, we might see AGI sooner than expected.

menaerus 6 days ago | parent | prev | next [-]

> ... and has an idea of how they work shouldn't think its going to lead to "AGI"

Not sure what level of understanding you're referring to, but having learned about and researched pretty much all of the LLM internals, I've been led to exactly the opposite line of thinking. To me it's unbelievable what we have today.

janalsncm 6 days ago | parent | prev | next [-]

I think AI research is like anything else really. The smartest people are heads down working on their problems. The people going on podcasts are less connected to day to day work.

It’s also pretty useless to talk about whether something is AGI without defining intelligence in the first place.

foobarian 6 days ago | parent | prev | next [-]

I think something like we saw in the show "Devs" is much more likely, although what the developers did with it in the show was bonkers unrealistic. But some kind of big enough quantum device basically.

guardian5x 6 days ago | parent | prev | next [-]

Just scaling them up might not lead to "AGI", but they can still serve as a bridge to AGI.

meowface 6 days ago | parent | prev | next [-]

This is not and has not been the consensus opinion. If you're not an AI researcher you shouldn't write as if you've set your confidence parameter to 0.95.

Of course it might be the case, but it's not a thing that should be expressed with such confidence.

blackhaz 6 days ago | parent | prev [-]

Is it widely accepted that LLMs won't lead to AGI? I've asked Gemini, so it came up with four primary arguments for this claim, commenting on them briefly:

1) LLMs are simple "next token predictors", so they only mimic thinking: But can't it be argued that current models operate on layers of multiple depths and are able to actually understand by building concepts and making connections on abstract levels? Also, don't we all mimic? (A toy sketch of what "next token prediction" means follows this list.)

2) Grounding problem: Yes, models build their world models on text data, but we have models operating on non-textual data already, so this appears to be a technical obstacle rather than a fundamental one.

3) Lack of a world model: But can anyone really claim they have a coherent model of reality? There are flat-earthers, yet I still wouldn't deny that they have general intelligence. People hallucinate and make mistakes all the time. I'd argue hallucinations are in fact a sign of an emerging intelligence.

4) Fixed learning data sets. Looks like this is now being actively solved with self-improving models?
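To make the "next token predictor" framing in 1) concrete, here is a toy sketch of the autoregressive loop behind generation. The bigram table and function names below are made up purely for illustration; a real LLM replaces the lookup with a transformer that outputs a probability distribution over its whole vocabulary, but the outer loop has the same shape:

    import random

    # Toy stand-in for a language model: maps the previous token to
    # possible next tokens. (Illustrative only; a real LLM computes a
    # probability distribution with a neural network.)
    bigram = {
        "the": ["cat", "dog"],
        "cat": ["sat", "chased"],
        "dog": ["ran", "barked"],
        "sat": ["down"],
        "ran": ["away"],
    }

    def generate(prompt, max_tokens=5):
        tokens = prompt.split()
        for _ in range(max_tokens):
            candidates = bigram.get(tokens[-1])
            if not candidates:  # no known continuation: stop
                break
            tokens.append(random.choice(candidates))  # sample the next token
        return " ".join(tokens)

    print(generate("the"))  # e.g. "the cat sat down"

The disagreement in 1) is really about whether scaling this kind of conditional prediction up far enough amounts to understanding, or only to imitation.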

I just couldn't find a strong argument supporting this claim. What am I missing?

globnomulous 6 days ago | parent | next [-]

Why on earth would you copy and paste an LLM's output into a comment? What does that accomplish or provide that just a simply stated argument doesn't accomplish more succinctly? If you don't know something, simply don't comment on it -- or ask a question.

blackhaz 6 days ago | parent [-]

None of the above is AI.

globnomulous 4 days ago | parent [-]

> I've asked Gemini, so it came up with four primary arguments for this claim, commenting on them briefly:

This line means, and literally says, that everything that follows is a summary or direct quotation from an LLM's output.

There's a more charitable but unintuitive interpretation, in which "commenting on them briefly" is intended to mean "I will comment on them briefly:". But this isn't a natural interpretation. It's one I could be expected to reach only after seeing your statement that 'none of the above is AI.' But even this more charitable interpretation actually contradicts your claim that it's not AI.

So now I'm even less sure I know what you meant to communicate. Either I'm missing something really obvious or the writing doesn't communicate what you intended.

welferkj 6 days ago | parent | prev [-]

For future reference, pasting LLM slop feels exactly as patronizing as back when people pasted links to Google searches in response to questions they considered beneath their dignity to answer. Except in this case, no one asked to begin with.