ManuelKiessling 8 hours ago

(I'm saying this as someone who uses AI for coding a lot and mostly loves it.) Yeah, but is that really the same? Compilers work deterministically — if it works once, it will always work. LLMs are a different story, for now.

betenoire 8 hours ago | parent | next [-]

Said another way, compilers are a translation of existing formal code. Compilers don't add features, they don't create algorithms (unrolling, etc., notwithstanding), they are another expression of the same encoded solution.

LLMs are nothing like that

cortesoft 7 hours ago | parent [-]

LLMs are just translating text into output, too, and are running on deterministic computers like every other bit of code we run. They aren't magic.

It is just the scale that makes them appear non-deterministic to a human looking at them: the deterministic chain is too large for any human to follow end to end. But that doesn't mean an LLM isn't, in the end, a function that maps input data to output data in a deterministic way.

betenoire 6 hours ago | parent [-]

just text !== syntactically correct code that solves a defined problem

There is a world of difference between translation and generation. It's even in the name: generative AI. I didn't say anything about magic.

cortesoft 7 hours ago | parent | prev [-]

LLMs are deterministic, too. I know there is randomness in the choosing tokens, but that randomness is derived from a random seed that can be repeated.
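To make that concrete, here is a minimal sketch (a toy, not a real LLM): temperature sampling over a tiny made-up vocabulary, where all the "randomness" comes from a seeded PRNG. The vocabulary and logits are invented for illustration; the point is only that with a fixed seed, the sampled token sequence is repeatable.

```python
import math
import random

def sample_tokens(logits, vocab, n, temperature, seed):
    """Sample n tokens from softmax(logits / temperature) using a seeded PRNG."""
    rng = random.Random(seed)  # every random choice is derived from this seed
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(l - m) for l in scaled]  # unnormalized softmax
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]  # made-up scores for illustration

a = sample_tokens(logits, vocab, 10, temperature=0.8, seed=42)
b = sample_tokens(logits, vocab, 10, temperature=0.8, seed=42)
assert a == b  # same seed, same token sequence, every run
```

Nonzero temperature doesn't make the process non-deterministic; it just means the output depends on the seed as well as the input.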

Supermancho 5 hours ago | parent | next [-]

Only if the seed is known. Determinism is often predicated on perfect information, which many programs don't have, so their operations cannot practically be reproduced. Whether you call something deterministic or non-deterministic is contextual: it depends on whether you care about theory or practice.

lelanthran 7 hours ago | parent | prev [-]

If I understand your argument, you're saying that models can be deterministic, right?

Care to point to any that are set up to be deterministic?

Did you ever stop to think about why no one can get any use out of a model with temp set to zero?

mrob 5 hours ago | parent | next [-]

llama.cpp is deterministic when run with a specified PRNG seed, at least when running on CPU without caching. This is true regardless of temperature. But when people say "non-deterministic", they really mean something closer to "chaotic", i.e. the output can vary greatly with small changes to input, and there is no reliable way to predict when this will happen without running the full calculation. This is very different behavior from traditional compilers.
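For reference, a sketch of what pinning the seed looks like with llama.cpp's CLI. Flag names and the binary name vary between llama.cpp versions (check `--help` for your build), and the model path here is a placeholder:

```shell
# Two identical invocations with a pinned seed; on CPU, with caching
# disabled, these should produce byte-identical output.
llama-cli -m model.gguf --seed 42 --temp 0.8 -n 64 -p "Hello" > run1.txt
llama-cli -m model.gguf --seed 42 --temp 0.8 -n 64 -p "Hello" > run2.txt
diff run1.txt run2.txt
```

The chaotic part is orthogonal: change one word of the prompt and the two runs can diverge arbitrarily, even though each run is individually reproducible.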

cortesoft 6 hours ago | parent | prev [-]

No, LLMs ARE deterministic, just like all computer programs are.

I get why that is, in practice, different from the manner in which compilers are deterministic, but my point is that the difference isn't because of determinism.

betenoire 4 hours ago | parent [-]

I think you are misunderstanding the term "deterministic". Running on deterministic hardware does not mean an algorithm is deterministic.

Create a program that reads from /dev/random (not urandom). It's not deterministic.
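A minimal illustration of that program, assuming a Unix system where /dev/random exists:

```python
# A program whose output depends on /dev/random is not deterministic:
# two reads produce different bytes, and there is no seed you can
# replay to reproduce them.
with open("/dev/random", "rb") as f:
    a = f.read(16)
with open("/dev/random", "rb") as f:
    b = f.read(16)
assert a != b  # two independent 16-byte reads virtually never collide
```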

cortesoft 3 hours ago | parent [-]

Fair, although you can absolutely use local LLMs in a deterministic way (by using fixed seeds for the random number generation), and my point is that even if you did that with your LLM, it wouldn't change the feeling someone has about not being able to reason out what was happening.

In other words, it isn't the random number part of LLMs that make them seem like a black box and unpredictable, but rather the complexity of the underlying model. Even if you ran it in a deterministic way, I don't think people would suddenly feel more confident about the outputted code.