lysace 5 days ago

Really stupid question: How is Gemini-like 'thinking' separate from artificial general intelligence (AGI)?

When I ask Gemini 3 Flash this question, the answer is vague but agency comes up a lot. Gemini thinking is always triggered by a query.

This seems like a higher-level programming issue to me. Turn it into a loop. Keep the context. Those two things make it costly for sure. But does that make it an AGI? Surely Google has tried this?
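Something like this sketch is what I mean (call_model here is just a stand-in for whatever LLM API you'd use, not a real Gemini endpoint):

    # Naive "agent" wrapper: feed the model its own prior output until it
    # says it is done, keeping the whole history as context on every turn.
    def run_loop(task, call_model, max_steps=20):
        context = [f"Task: {task}"]
        for _ in range(max_steps):
            reply = call_model("\n".join(context))  # re-send full history
            context.append(reply)
            if "DONE" in reply:                     # model signals completion
                break
        return context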

CamperBob2 5 days ago | parent | next [-]

I don't think we'll get genuine AGI without long-term memory, specifically in the form of weight adjustment rather than just LoRAs or longer and longer contexts. When the model gets something wrong and we tell it "That's wrong, here's the right answer," it needs to remember that.

Which obviously opens up a can of worms regarding who should have authority to supply the "right answer," but still... lacking the core capability, AGI isn't something we can talk about yet.

LLMs will be a part of AGI, I'm sure, but they are insufficient to get us there on their own. A big step forward but probably far from the last.
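To make the missing capability concrete: what I'm imagining is roughly a single supervised fine-tuning step on the correction text itself. A sketch using Hugging Face transformers (gpt2 is just a small stand-in model, and a naive one-off update like this also invites catastrophic forgetting, which is part of why nobody ships it):

    # Sketch: treat the user's correction as one training example and nudge
    # the weights toward it. This is what "weight adjustment as memory"
    # means mechanically, not a recommended training recipe.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    correction = "Q: <question it got wrong>  A: <the right answer>"
    batch = tok(correction, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()       # weights now carry the correction
    opt.zero_grad()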

bananaflag 5 days ago | parent [-]

> When the model gets something wrong and we tell it "That's wrong, here's the right answer," it needs to remember that.

Problem is that once we figure out how to do this, each copy of the original model will diverge in wildly unexpected ways. Just as we have 8 billion different people in this world, we'll have 16 gazillion different AIs, all of them interacting with each other and remembering all those interactions. This world scares me greatly.

criley2 5 days ago | parent | prev | next [-]

Advanced reasoning LLMs simulate many parts of AGI and feel really smart, but they fall short in many critical ways.

- An AGI wouldn't hallucinate; it would be consistent, reliable, and aware of its own limitations

- An AGI wouldn't need extensive retraining, human-reinforced training, or model updates. It would be capable of true self-learning / self-training in real time.

- An AGI would demonstrate real genuine understanding and mental modeling, not pattern matching over correlations

- It would demonstrate agency and motivation, not be purely reactive to prompting

- It would have persistent, integrated memory. LLMs are stateless and driven by the current context.

- It should even demonstrate consciousness.

And more. I agree that what we've designed is truly impressive and simulates intelligence at a really high level. But true AGI is far more advanced.

waffletower 5 days ago | parent | next [-]

Humans can fail at some of these qualifications, often without guile:

- being consistent and knowing their limitations

- demonstrating effective understanding and mental modeling (people do not do this universally)

I don't believe the "consciousness" qualification is at all appropriate; I would argue that it is a projection of the human machine's experience onto an entirely different machine with a substantially different existential topology, i.e. a different relationship to time and sensorium. I don't think artificial general intelligence is a binary label that applies only if a machine rigidly simulates human agency, memory, and sensing.

versteegen 4 days ago | parent | prev | next [-]

> - It should even demonstrate consciousness.

I disagreed with most of your assertions even before I hit the last point. This is just about the most extreme thing you could ask for. I think very few AI researchers would agree with this definition of AGI.

lysace 5 days ago | parent | prev [-]

Thanks for humoring my stupid question with a great answer. I was kind of hoping for something like this :).

dcre 5 days ago | parent | prev | next [-]

This is what every agentic coding tool does. You can try it yourself right now with the Gemini CLI, OpenCode, or 20 other tools.

andai 5 days ago | parent | prev [-]

AGI is hard, but we can solve most tasks with artificial stupidity inside an `until done` loop.

lysace 5 days ago | parent [-]

Just a matter of time and cost. Eventually...