adastra22 7 days ago

> Indeed I have no experience with Claude Code, but I use Claude via chat...

These are not even remotely similar, despite the name. Things are moving very fast, and the sort of chat-based interface that you describe in your article is already obsolete.

Claude is the LLM. Claude Code is a combination of internal tools that let the agent track its goals, current state, priorities, etc., plus a looped mechanism that keeps it on track and focused and has it debug its own actions. With the proper subagents it can keep its context from being poisoned by false starts, and its built-in todo system keeps it on task.
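Roughly, and this is my own sketch rather than Anthropic's actual internals (the Step shape and the call_llm hook here are made up for illustration), the loop amounts to something like:

    from dataclasses import dataclass, field

    # Illustrative sketch only, not Claude Code's real implementation.
    @dataclass
    class Step:
        kind: str                              # "tool_call", "add_subtasks", or "done"
        name: str = ""                         # tool to run, if kind == "tool_call"
        args: dict = field(default_factory=dict)
        subtasks: list = field(default_factory=list)
        summary: str = ""

    def agent_loop(task, tools, call_llm):
        todo = [task]      # the agent's own task list
        context = []       # transcript fed back to the model each turn
        while todo:
            step = call_llm(todo[0], context, tools)   # hypothetical model call
            if step.kind == "tool_call":
                result = tools[step.name](**step.args)
                context.append((step.name, result))    # feed tool output back
            elif step.kind == "add_subtasks":
                todo[1:1] = step.subtasks              # break the work down
            else:                                      # "done"
                context.append(step.summary)
                todo.pop(0)                            # move to the next item
        return context

The point is that the model never runs open-loop: every turn it sees its own task list and the results of its previous actions, which is what keeps it from wandering off.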

Really, try it out and see for yourself. It doesn't work magic out of the box, and absolutely needs some hand-holding to get it to work well, but that's only because it is so new. The next generation of tooling will have these subagent definitions auto-selected and included in context so you can hit the ground running.

We are already starting to see a flood of software coming out with very few active coders on the team, as you can see on the HN front page. I say "very few active coders" not "no programmers" because using Claude Code effectively still requires domain expertise as we work out the bugs in agent orchestration. But once that is done, there aren't any obvious remaining stumbling blocks to a PM running a no-coder, all-AI product team.

TheOtherHobbes 7 days ago | parent [-]

Claude Code isn't an LLM. It's a hybrid architecture where an LLM provides the interface and some of the reasoning, embedded inside a broader set of more or less deterministic tools.

It's obvious LLMs can't do the job without these external tools, so the claim above - that LLMs can't do this job - is on firm ground.

But it's also obvious these hybrid systems will become more and more complex and capable over time, and there's a possibility they will be able to replace humans at every level of the stack, from junior to CEO.

If that happens, it's inevitable these domain-specific systems will be networked into a kind of interhybrid AGI, where you can ask for specific outputs, and if the domain has been automated you'll be guided to what you want.

It's still a hybrid architecture though. LLMs on their own aren't going to make this work.

It's also short of AGI, never mind ASI, because AGI requires a system that would create high quality domain-specific systems from scratch given a domain to automate.

adastra22 7 days ago | parent [-]

If you want to be pedantic about word definitions, it absolutely is AGI: artificial general intelligence.

Whether you draw the system boundary of an LLM to include the tools it calls or not is a rather arbitrary distinction, and not very interesting.

nomel 7 days ago | parent | next [-]

Nearly every definition of AGI I’ve seen (there are many) includes the ability to self-learn and create “novel ideas”. The LLM behind it isn’t capable of this, and I don’t think the addition of the current set of tools enables this either.

adastra22 7 days ago | parent [-]

Artificial general intelligence was a phrase invented to draw a distinction from “narrow intelligence”: algorithms that can only be applied to specific problem domains. E.g. Deep Blue was amazing at playing chess, but couldn’t play Go, much less prioritize a grocery list. Any artificial program that can be applied to arbitrary tasks it wasn’t pre-trained on is AGI. ChatGPT and especially the more recent agentic models are absolutely and unquestionably AGI in the original definition of the term.

Goalposts are moving though. Through the efforts of various people in the rationalist-connected space, the word has since morphed to be implicitly synonymous with the notion of superintelligence and self-improvement, hence the vague and conflicting definitions people now ascribe to it.

Also, fwiw the training process behind the generation of an LLM is absolutely able to discover new and novel ideas, in the same sense that Kepler’s laws of planetary motion were new and novel if all you had were Tycho Brahe’s astronomical observations. Inference can tease out these novel discoveries, if nothing else. But I suspect that your definition of creative and novel would also exclude human creativity if it were rigorously applied: our brains, after all, are merely remixing our own experiences too.

Vegenoid 7 days ago | parent | prev [-]

> If you want to be pedantic about word definitions, it absolutely is AGI: artificial general intelligence.

This isn't being pedantic, it's deliberately misinterpreting a commonly used term by taking every word literally for effect. Terms, like words, can take on a meaning that is distinct from looking at each constituent part and coming up with your interpretation of a literal definition based on those parts.

adastra22 7 days ago | parent [-]

I didn't invent this interpretation. It's how the word was originally defined, and used for many, many decades, by the founders of the field. See for example:

https://www-formal.stanford.edu/jmc/generality.pdf

Or look at the old / early AGI conference series:

https://agi-conference.org

Or read any old, pre-2009 (ImageNet) AI textbook. It will talk about "narrow intelligence" vs "general intelligence," a dichotomy that exists more in GOFAI than in the deep learning approaches.

Maybe I'm a curmudgeon and this is entering get-off-my-lawn territory, but I find it immensely annoying when existing clear terminology (AGI vs ASI, strong vs weak, narrow vs. general) is superseded by a confused mix of popular meanings that lack any clear definition.

scoopdewoop 5 hours ago | parent | next [-]

I'm a week late, but I do appreciate you pointing out this real phenomenon of moving the goalposts. Language is really general, and multimodal models even more so. The idea that AGI should be way more anthropomorphic and omnipotent is really recent. New definitions almost disregard the possibility of stupid general intelligence, despite proof-by-existence living all around us.

Vegenoid 7 days ago | parent | prev [-]

The McCarthy paper doesn't use the term "artificial general intelligence" anywhere. It does use the word "general" a lot in relation to artificial intelligence.

I looked at the AGI conference page for 2009: https://agi-conference.org/2009/

When it uses the term "artificial general intelligence", it hyperlinks to this page: http://www.agiri.org/wiki/index.php?title=Artificial_General...

Which seems unavailable, so here is an archive from 2007: https://web.archive.org/web/20070106033535/http://www.agiri....

And that page says "In Nov. 1997, the term Artificial General Intelligence was first coined by Mark Avrum Gubrud in the abstract for his paper Nanotechnology and International Security". And here is that paper: https://web.archive.org/web/20070205153112/http://www.foresi...

That paper says: "By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed."

I think insisting that AGI means something different from what everyone else means when they say it is not useful, and will only lead to people getting confused and disagreeing with you. I agree that it's not a great term.