measurablefunc a day ago

This is known as the data processing inequality. Non-invertible functions cannot create more information than is available in their inputs: https://blog.blackhc.net/2023/08/sdpi_fsvi/. Whatever arithmetic operations are involved in laundering the inputs by stripping the original sources & references, they cannot produce novelty that wasn't already present in some combination of those inputs.
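For reference, the standard statement (as in the linked post) is: if X → Y → Z forms a Markov chain, i.e. Z is computed from Y alone, then

    X \to Y \to Z \quad\Longrightarrow\quad I(X;Z) \le I(X;Y)

so no post-processing of Y, deterministic or randomized, can increase the information Y carries about X.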

Neural networks can at best uncover latent correlations that were already available in the inputs. Expecting anything more is basically just wishful thinking.

xyzzy123 a day ago | parent | next [-]

Using this reasoning, would you argue that a new proof of a theorem adds no new information that was not present in the axioms, rules of inference and so on?

If so, I'm not sure it's a useful framing.

For novel writing, sure, I would not expect much truly interesting progress from LLMs without human input because fundamentally they are unable to have human experiences, and novels are a shadow or projection of that.

But in math – and a lot of programming – the "world" is chiefly symbolic. The whole game is searching the space for new and useful arrangements. You don’t need to create new information in an information-theoretic sense for that. Even for the non-symbolic side (say diagnosing a network issue) of computing, AIs can interact with things almost as directly as we can by running commands so they are not fundamentally disadvantaged in terms of "closing the loop" with reality or conducting experiments.

measurablefunc a day ago | parent [-]

Sound deductive rules of logic cannot create novelty that exceeds the inherent limits of their foundational axiomatic assumptions. Likewise, you cannot expect novel results from a neural network that exceed the information capacity of its training corpus & the inductive biases encoded by its architecture. So if the training corpus is semantically unsound & inconsistent, then there is no reason to expect that it will produce logically sound & semantically coherent outputs (i.e. garbage inputs → garbage outputs).

xyzzy123 18 hours ago | parent [-]

Maybe? But it also seems like you are not accounting for new information at inference time. Let's pretend I agree the LLM is a plagiarism machine that can produce no novelty in and of itself beyond what it was trained on, and produces mostly garbage (I only half agree, lol, and I think "novelty" is under-specified here).

When I apply that machine (with its giant pool of pirated knowledge) _to my inputs and context_ I can get results applicable to my modestly novel situation which is not in the training data. Perhaps the output is garbage. Naturally if my situation is way out of distribution I cannot expect very good results.

But I often don't care if the results are garbage some (or even most!) of the time if I have a way to ground-truth whether they are useful to me. This might be via running a compiler, a test suite, a theorem prover, or the Mk 1 eyeball. Of course the name of the game is to get agents to do this themselves, and this is now fairly standard practice.
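A minimal sketch of that generate-then-verify loop (hypothetical helper names; the checker could be a compiler, a test suite, or a prover):

    from typing import Callable, Optional

    def generate_candidate(prompt: str) -> str:
        """Hypothetical LLM call; any provider/client would do here."""
        raise NotImplementedError

    def first_verified(prompt: str,
                       check: Callable[[str], bool],
                       attempts: int = 5) -> Optional[str]:
        """Keep sampling until some candidate passes the ground-truth check.
        Garbage outputs are fine as long as the check can reject them."""
        for _ in range(attempts):
            candidate = generate_candidate(prompt)
            if check(candidate):  # e.g. compiles, tests pass, proof checks
                return candidate
        return None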

measurablefunc 18 hours ago | parent [-]

I'm not here to convince you whether Markov chains are helpful for your use cases or not. I know from personal experience that even in cases where I have a logically constrained query I will receive completely nonsensical responses¹.

¹https://chatgpt.com/share/69367c7a-8258-8009-877c-b44b267a35...

jacquesm 17 hours ago | parent [-]

> Here is a correct, standard correction:

It does this all the time, but as often as not it then outputs nonsense again, just different nonsense, and if you keep it running long enough it starts repeating previous errors (presumably because some sliding window is exhausted).

measurablefunc 16 hours ago | parent [-]

That's been my general experience, and that was just the most recent example. People keep forgetting that unless they can independently verify the outputs, they are essentially paying OpenAI for the privilege of being very confidently gaslighted.

jacquesm 10 hours ago | parent [-]

It would be a really nice exercise - for which I unfortunately do not have the time - to have a non-trivial conversation with the best models of the day and then rigorously fact-check every bit of output to determine its quality. Judging from my own experience (probably not a representative sample), it would be a very meager showing.

I now use AI only as a means of last resort, and mostly as a source of inspiration rather than as a tool aimed directly at solving an issue. Used that way it has been helpful on occasion, but at least as often it has been a tremendous waste of time.

nl 13 hours ago | parent | prev | next [-]

This is simply not true.

Modern LLMs are trained with reinforcement learning, where the model attempts to solve a coding problem and receives a reward if the attempt succeeds.

The data processing inequality (from your link) isn't relevant here: the model is learning from the reinforcement signal, not from human-written code.
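A minimal sketch of where that reward comes from (hypothetical file names; real pipelines like RLHF/RLVR are far more involved). The signal is produced by executing the model's attempt, not by comparing it to a human-written reference:

    import os, subprocess, tempfile

    def reward(candidate_code: str, tests: str) -> float:
        """1.0 if the attempt passes the provided tests, else 0.0.
        The reward depends only on execution results; no human-written
        reference solution appears anywhere in the signal."""
        with tempfile.TemporaryDirectory() as d:
            with open(os.path.join(d, "solution.py"), "w") as f:
                f.write(candidate_code)
            with open(os.path.join(d, "test_solution.py"), "w") as f:
                f.write(tests)
            result = subprocess.run(["pytest", "-q"], cwd=d, capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0

    # A policy-gradient update (e.g. PPO) would then push the model toward
    # attempts that earn reward 1.0.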

jacquesm 10 hours ago | parent [-]

Ok, then we can leave the training data out of the input, everybody happy.

cornel_io 19 hours ago | parent | prev [-]

Theoretical "proofs" of limitations like this are always unhelpful because they're too broad, and apply just as well to humans as they do to LLMs. The result is true but it doesn't actually apply any limitation that matters.

measurablefunc 19 hours ago | parent [-]

You're confused about what applies to people & what applies to formal systems. You will continue to be confused as long as you keep thinking formal results can be applied in informal contexts.