bigstrat2003 3 hours ago

> Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1].

Well, of course. Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on.

coldtea an hour ago | parent | next [-]

>Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on

People also "synthesize from the data they were trained on". Intelligence is a result of that. So this dead-end argument then turns into begging the question: LLMs don't have intelligence because LLMs can't have intelligence.

lateforwork 3 hours ago | parent | prev | next [-]

> don't have a shred of intelligence. ... They don't understand, only synthesize from the data they were trained on.

Couldn't you say that about 99% of humans too?

chongli 2 hours ago | parent | next [-]

99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art. But it's a different 1% for every area of expertise! Add it all up and you get a lot more than 1% of humans contributing to the sum of knowledge.

And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions) but its impact can still be felt throughout the economy.

coldtea an hour ago | parent [-]

>99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art

How? By also "synthesizing the data they were trained on" (their experience, education, memories, etc.).

chongli an hour ago | parent [-]

No, that's not all we're doing. If that's all humans ever did, we'd still be living in the stone age.

sally_glance 24 minutes ago | parent [-]

Can we be sure? Maybe it's just very rare for experience, education and memories to line up in exactly the way that allows synthesizing something innovative. So it requires a few billion candidates and maybe a couple of generations too.

chongli 17 minutes ago | parent [-]

I want to point back to my remark about everyday people.

> if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more

This isn't a throwaway comment. I do this all the time myself: everywhere I've worked, I challenge the assumptions and try to make things better. It's not a rare thing at all, it's just not revolutionary.

Revolutions are rare. Perhaps only a handful of them have ever happened in any one particular field. But you simply will not ever go from Aristotelian physics to Newtonian physics to General Relativity by merely "synthesizing the data they were trained on", as the previous comment supposed.

Edit: I should also say something about experimentation. You can't do it from an armchair, which is all an LLM has access to (at present). Real people learn things all the time by conducting experiments in the world and observing the results, without necessarily working as formal scientists. Babies learn a lot by experimenting, for example. This is one particular avenue of new knowledge which is entirely separate from experience, education, memories, etc. because an experiment always has the potential to contradict all of that.

NewsaHackO 2 hours ago | parent | prev | next [-]

Yes, and the natural extension is that a lot of what people do day to day is not driven by intelligence; it is just reusing a known solution to a presented problem in a bespoke manner. However, this is something that AI excels at.

irishcoffee 2 hours ago | parent | prev | next [-]

The LLM was trained on 100% of humans; the 99% you're scoffing at is feeding the LLM its answers.

lateforwork 2 hours ago | parent [-]

100% (or close to it) of material AI trains on was human generated, but that doesn't mean 100% of humans are generating useful material for AI training.

coldtea an hour ago | parent [-]

Let's train one on just the expert written code and books then, and not the entirety of GitHub or Stack Overflow and such, and see how it fares...

jryan49 3 hours ago | parent | prev [-]

Yes... maybe not 99%...

antonvs 2 hours ago | parent | prev [-]

You could say the same thing about Chris Lattner. How did he advance the state of the art with Swift? It’s essentially just a subjective rearranging of deck chairs: “I like this but not that.” Someone had to explain to Lattner why it was a good idea to support tail recursion in LLVM, for example - something he would have already known if he had been trained differently. He regurgitates his training just like most of us do.

That might read like an insult to Lattner, but what I’m really pointing out is that we tend to hold AIs to a much higher standard than we do humans, because the real goal of such commentary is to attempt to dismiss a perceived competitive threat.