ashwinnair99 4 hours ago

What does "deterministic silence" even mean here? Genuinely curious before reading.

nextaccountic 3 hours ago | parent | next [-]

The model reliably outputs nothing when prompted to embody the void.

Anyway, later they concede that it's not 100% deterministic, because:

> Temperature 0 non-determinism. While all confirmatory results were 30/30, known floating-point non-determinism exists at temperature 0 in both APIs. One control concept (thunder) showed 1/30 void on GPT, demonstrating marginal non-determinism.

Actually, FP non-determinism is about different machines producing different outputs for the same computation. On the same machine, FP arithmetic is fully deterministic. (It can even be made deterministic across platforms, at some performance cost on at least some machines.)

What makes computers non-deterministic here is concurrency: concurrent code can interleave differently on each run. However, it is possible to build LLMs that are 100% deterministic [0] (you can make them deterministic by ensuring those interleavings all produce the same results); it's just that people generally don't do that.

[0] For example, Fabrice Bellard's ts_zip https://bellard.org/ts_zip/ uses an LLM to compress text. It would not be able to decompress the text losslessly if the model weren't fully deterministic.
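The concurrency point hinges on floating-point addition not being associative: the same partial results combined in a different order (as different thread interleavings would do) can produce a different total. A minimal sketch, using the classic large-magnitude cancellation example:

```python
# Floating-point addition is not associative, so the order in which
# concurrent partial sums get combined can change the final result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # a and b cancel first, then add 1.0
right = a + (b + c)  # b absorbs c (1.0 is below the ulp of 1e16)

print(left)   # 1.0
print(right)  # 0.0
```

This is why a parallel reduction over identical inputs can give run-to-run differences unless the combination order is fixed.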

charcircuit 3 hours ago | parent | prev [-]

It means that the API immediately generated a stop token every time the same API call was made. The API call sets the temperature to 0 (the OpenAI documentation is not clear if gpt 5.2 can even have its temperature set to 0), which makes sampling deterministic.
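Temperature 0 is conventionally treated as greedy decoding: instead of drawing from the softmax distribution, the decoder always takes the highest-logit token, so repeated runs over identical logits yield the same token. A rough sketch of the sampling step (the logits here are made up for illustration):

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index; temperature 0 degenerates to argmax (greedy)."""
    if temperature == 0:
        # Greedy: deterministic given bit-identical logits.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax with temperature scaling, then a random draw.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return random.choices(range(len(logits)),
                          weights=[e / total for e in exps])[0]

logits = [2.0, 5.0, 3.5]        # hypothetical 3-token vocabulary
print(sample_token(logits, 0))  # always 1, the index of the largest logit
```

The determinism caveat is that this only holds if the logits themselves come out bit-identical each run, which is exactly where the concurrency issue discussed above comes in.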

embedding-shape 2 hours ago | parent [-]

> to 0 (the OpenAI documentation is not clear if gpt 5.2 can even have its temperature set to 0)

I think for those models any value but 1.0 for temperature isn't supported; they hard-error at request time if you try to set it to something else.