KoolKat23 3 days ago

Language and math are a world model of physical reality. You could not read a book and make sense of it if this were not true.

An apple falls to the ground because of...? Gravity.

In real life that's the answer, and I'm very sure the pre-carved channel will also lead to gravity.

adamzwasserman 3 days ago

You're proving my point. You know the word 'gravity' appears in texts about falling apples. An LLM knows that too. But neither you nor the LLM discovered gravity by observing reality and creating new models. You both inherited a pre-existing linguistic map. That's my entire argument about why LLMs can't do Nobel Prize-level work.

KoolKat23 2 days ago

Well, it depends. It doesn't have arms and legs, so it can't physically experiment in the real world. A human is currently a proxy for that: we can do its bidding and feed the results back, so it's not really an issue.

Most of the time the data is already available to it, and it merely needs to prove a theorem using existing historical data points and math.

For instance, the Black-Scholes-Merton equation, which won the Nobel economics prize, was derived from preexisting mathematical concepts and principles. Its application and validation relied on existing data.
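
To make that concrete, the model as it's usually stated (S spot price, K strike, r risk-free rate, σ volatility, N the standard normal CDF) is assembled entirely from preexisting machinery: Itô calculus, a PDE, and the normal distribution:

    \frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0

    C = S\,N(d_1) - K e^{-r(T-t)} N(d_2), \qquad d_{1,2} = \frac{\ln(S/K) + (r \pm \sigma^2/2)(T-t)}{\sigma\sqrt{T-t}}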

adamzwasserman 2 days ago

The Black-Scholes-Merton equation wasn't derived by rearranging words about financial markets. It required understanding what options are (financial reality), recognizing a mathematical analogy to heat diffusion (physical reality), and validating the model against actual market behavior (empirical reality). At every step, the discoverers had to verify their linguistic/mathematical model against the territory.
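
That analogy is literal, by the way: if I remember the standard derivation correctly, substituting S = K e^x and τ = σ²(T − t)/2, together with an exponential rescaling of V, turns the pricing PDE into the plain one-dimensional heat equation,

    \frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}

and spotting that correspondence required understanding both problems, not just the words used to describe them.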

LLMs only rearrange descriptions of discoveries. They can't recognize when their model contradicts reality because they never touch reality. That's not a solvable limitation. It's definitional.

We're clearly operating from different premises about what constitutes discovery versus recombination. I've made my case; you're welcome to the last word.

KoolKat23 2 days ago

I understand your viewpoint.

LLMs these days have reasoning and can learn in context. They do touch reality: your feedback. Results can also be proven mathematically, and other people's scientific papers are critiqued and corrected as new feedback arrives.

This is no different from Claude Code bash-testing and fixing its own output errors recursively until the code works.
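
Roughly, that loop is just this (a sketch in Python; propose_fix is a stand-in for whatever model call you actually use, not a real API):

    # Sketch of a generate-test-fix loop: run the code, feed the error back, retry.
    # propose_fix(code, error) is hypothetical -- swap in your own model call.
    import subprocess, sys, tempfile

    def run_until_it_works(code, propose_fix, max_rounds=5):
        for _ in range(max_rounds):
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run([sys.executable, path], capture_output=True, text=True)
            if result.returncode == 0:
                return code                          # reality (the interpreter) says it works
            code = propose_fix(code, result.stderr)  # the model only sees the error text
        raise RuntimeError("still failing after max_rounds attempts")

The interpreter is the contact with reality there; the model just reacts to the error text it's handed.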

They already deal with unknown combinations all day: our prompts.

Yes, it's still brittle, though, and they're not very intelligent yet.