lateforwork 3 hours ago

Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude. Lattner found nothing innovative in the AI-generated code [1]. And this is why humans will still be needed to advance the state of the art.

AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.

You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.

As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.

[1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...

bigstrat2003 an hour ago | parent | next [-]

> Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1].

Well, of course. Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on.

lateforwork an hour ago | parent | next [-]

> don't have a shred of intelligence. ... They don't understand, only synthesize from the data they were trained on.

Couldn't you say that about 99% of humans too?

chongli 22 minutes ago | parent | next [-]

99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art. But it's a different 1% for every area of expertise! Add it all up and you get a lot more than 1% of humans contributing to the sum of knowledge.

And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions) but its impact can still be felt throughout the economy.

NewsaHackO 44 minutes ago | parent | prev | next [-]

Yes, and the natural extension is that a lot of what people do day to day is not driven by intelligence; it is just applying a known solution to the problem at hand in a bespoke manner. And that is something AI excels at.

irishcoffee an hour ago | parent | prev | next [-]

The LLM was trained on 100% of humans, the 99% you’re scoffing at is feeding the LLM answers.

lateforwork an hour ago | parent [-]

100% (or close to it) of material AI trains on was human generated, but that doesn't mean 100% of humans are generating useful material for AI training.

jryan49 an hour ago | parent | prev [-]

Yes... maybe not 99%...

antonvs an hour ago | parent | prev [-]

You could say the same thing about Chris Lattner. How did he advance the state of the art with Swift? It’s essentially just a subjective rearranging of deck chairs: “I like this but not that.” Someone had to explain to Lattner why it was a good idea to support tail recursion in LLVM, for example - something he would have already known if he had been trained differently. He regurgitates his training just like most of us do.

That might read like an insult to Lattner, but what I’m really pointing out is that we tend to hold AIs to a much higher standard than we do humans, because the real goal of such commentary is to attempt to dismiss a perceived competitive threat.

random3 12 minutes ago | parent | prev | next [-]

So AI won't surpass humans, because Chris Lattner can do better than a model that didn't exist two years ago?

bluGill an hour ago | parent | prev | next [-]

A week ago there was an article about Donald Knuth asking an AI to prove something then-unproved, and it found the proof. I suppose it is possible that the great Knuth didn't know how to find this existing truth, but there is a reason we all doubted it (including me when I mentioned it there).

I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least), it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers, it would more likely be because I did something stupid that someone who knows how to write a compiler wouldn't.

coffeefirst 26 minutes ago | parent | next [-]

So I would like to know how it found the proof. Because it's much more likely to have been plucked from an obscure record, where the author didn't realize the result was special, than to have been derived on the fly.

This makes LLMs incredibly powerful research tools, which can create the illusion of emergent capabilities.

lateforwork an hour ago | parent | prev | next [-]

> as the space is already well covered

The US patent commissioner in 1899 supposedly wanted to shut down the patent office because "everything that can be invented has been invented." And yet, human ingenuity keeps proving otherwise.

bluGill an hour ago | parent | next [-]

There are lots of small innovations left. Only a few patents have ever been for revolutions. Small innovations add up to big things.

appletrotter an hour ago | parent | prev [-]

This is apocryphal :(

3836293648 an hour ago | parent | prev [-]

You could probably do it in a few days, C is not that hard to compile
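The core technique behind "C is not that hard to compile" can be sketched in a few lines. Below is an illustrative toy (not any real compiler's code; all names are made up): a recursive-descent parser for `+`/`*`/parenthesized integer expressions that emits stack-machine instructions, plus a tiny interpreter to check the output. A minimal C compiler is the same idea scaled up to C's full grammar.

```python
# Toy sketch of a compiler's core: tokenize, parse by recursive
# descent, emit stack-machine code. Illustrative only.
import re

def tokenize(src):
    return re.findall(r"\d+|[()+*]", src)

def compile_expr(src):
    toks = tokenize(src)
    pos = 0
    code = []

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat():
        nonlocal pos
        tok = toks[pos]
        pos += 1
        return tok

    def factor():                      # factor := NUMBER | '(' expr ')'
        if peek() == "(":
            eat(); expr(); eat()       # consume '(', inner expr, ')'
        else:
            code.append(("PUSH", int(eat())))

    def term():                        # term := factor ('*' factor)*
        factor()
        while peek() == "*":
            eat(); factor(); code.append(("MUL",))

    def expr():                        # expr := term ('+' term)*
        term()
        while peek() == "+":
            eat(); term(); code.append(("ADD",))

    expr()
    return code

def run(code):
    # Tiny stack machine to check the emitted instructions.
    stack = []
    for op, *args in code:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[0]

print(run(compile_expr("2+3*(4+1)")))  # precedence falls out of the grammar
```

Note how operator precedence falls out of the grammar structure (`expr` over `term` over `factor`) rather than needing any special handling.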

bluGill an hour ago | parent | next [-]

Claude built an optimizer as well (not a great one); that takes a lot more. Yes, I could likely brute-force a working C compiler much faster.

lateforwork an hour ago | parent | prev [-]

Right, and that was a design goal of the C language: to be close to the machine.

thesz 2 hours ago | parent | prev | next [-]

  > ...generate answers near the center of existing thought.
This is addressed right in Wikipedia's article on the universal approximation theorem [1].

[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...

"n the field of machine learning, the universal approximation theorems (UATs) state that neural networks with a certain structure can, in principle, approximate any continuous function to any desired degree of accuracy. These theorems provide a mathematical justification for using neural networks, assuring researchers that a sufficiently large or deep network can model the complex, non-linear relationships often found in real-world data."

And then: "Notice also that the neural network is only required to approximate within a compact set K {\displaystyle K}. The proof does not describe how the function would be extrapolated outside of the region."

NNs, LLMs included, are interpolators, not extrapolators.

And the region NN approximates within can be quite complex and not easily defined as "X:R^N drawn from N(c,s)^N" as SolidGoldMagiKarp [2] clearly shows.

[2] https://github.com/NiluK/SolidGoldMagikarp
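The interpolation-vs-extrapolation point can be made concrete with a toy one-dimensional fit (pure Python, illustrative only; a piecewise-linear fit stands in for a trained network). Fit sin(x) on samples from [0, π], then query inside and outside that region: inside, the error is tiny; outside, extending the fitted function diverges badly.

```python
import math

# Toy illustration: "train" on samples of sin(x) over [0, pi] via
# piecewise-linear interpolation, then query inside (interpolation)
# and far outside (extrapolation) the training region.
STEP = math.pi / 20
xs = [i * STEP for i in range(21)]     # training region [0, pi]
ys = [math.sin(x) for x in xs]

def predict(x):
    # Pick the nearest segment (clamped at the ends) and extend
    # it linearly, mimicking a model queried outside its data.
    i = max(0, min(len(xs) - 2, int(x / STEP)))
    slope = (ys[i + 1] - ys[i]) / STEP
    return ys[i] + slope * (x - xs[i])

inside = abs(predict(1.0) - math.sin(1.0))    # within [0, pi]
outside = abs(predict(6.0) - math.sin(6.0))   # far outside [0, pi]
print(f"error inside region:  {inside:.4f}")
print(f"error outside region: {outside:.4f}")
```

The fit is excellent anywhere inside the sampled region and useless beyond it, which is exactly the compact-set caveat in the theorem.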

fasterik an hour ago | parent [-]

It has been proven that recurrent neural networks are Turing complete [0]. So for every computable function, there is a neural network that computes it. That doesn't say anything about size or efficiency, but in principle this allows neural networks to simulate a wide range of intelligent and creative behavior, including the kind of extrapolation you're talking about.

[0] https://www.sciencedirect.com/science/article/pii/S002200008...
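A small concrete instance of that claim (a hand-wired sketch, not a trained model): a fixed-size recurrent net with threshold units can compute the running parity of a bit stream of any length, a computable function that no finite lookup-and-interpolate scheme over inputs handles for all lengths. The weights below are chosen by hand for illustration.

```python
def step(z):
    # Threshold (Heaviside) activation.
    return 1 if z > 0 else 0

def rnn_parity(bits):
    # Recurrent cell: state s plus input x feed two hidden threshold
    # units (OR and AND), whose difference gives XOR, i.e. parity.
    s = 0
    for x in bits:
        a = step(s + x - 0.5)   # hidden unit: OR(s, x)
        b = step(s + x - 1.5)   # hidden unit: AND(s, x)
        s = step(a - b - 0.5)   # output/state unit: XOR(s, x)
    return s

print(rnn_parity([1, 0, 1, 1]))  # odd number of ones -> 1
```

The same fixed weights work for inputs of any length, which is the essential difference between recurrence and a feedforward approximator over a compact set.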

slopinthebag 2 hours ago | parent | prev | next [-]

Yeah I think he had a pretty sane take in that article:

>CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.

And also

> The most effective engineers will not compete with AI at producing code, but will learn to collaborate with it, by using AI to explore ideas faster, iterate more broadly, and focus human effort on direction and design. Lower barriers to implementation do not reduce the importance of engineers; instead, they elevate the importance of vision, judgment, and taste. When creation becomes easier, deciding what is worth creating becomes the harder problem. AI accelerates execution, but meaning, direction, and responsibility remain fundamentally human.

Animats 2 hours ago | parent | prev | next [-]

I think this article was on HN a few days ago.

peehole an hour ago | parent | prev [-]

LLMs still do forEach; it's like wearing Tommy Hilfiger.