SkiFire13 2 days ago
> The theoretical advance we're waiting for in LLMs is auditable determinism.

Non-determinism in LLMs is currently a feature, introduced consciously. Even if it weren't, you would have to lock yourself to a specific model, since any future update would be a potentially breaking change.

> At that point, the utility of human-readable computer languages sort of goes out the door.

Their utility is having an unambiguous language in which to describe your solution and which you can audit for correctness. You'll never get this with an LLM, because its very premise is using natural language, which is ambiguous.
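(For concreteness: the non-determinism being described is a sampling choice at decode time. A minimal sketch, with names of my own invention, not taken from the thread; at temperature zero the decoder degenerates to greedy argmax and is fully deterministic, while a fixed seed makes sampled runs repeatable.)

    import numpy as np

    def sample_token(logits, temperature=1.0, seed=None):
        # Greedy decoding: temperature 0 always takes the argmax,
        # so the same logits always yield the same token.
        if temperature == 0.0:
            return int(np.argmax(logits))
        # Otherwise scale the logits, softmax, and sample -- this is
        # where the deliberate non-determinism enters. Fixing `seed`
        # pins the sampler and makes the draw reproducible.
        rng = np.random.default_rng(seed)
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(probs.size, p=probs))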
JumpCrisscross a day ago
> Non-determinism in LLMs is currently a feature, introduced consciously. Even if it weren't, you would have to lock yourself to a specific model, since any future update would be a potentially breaking change.

What I'm suggesting is a way to lock the model and then have it revert to that state to re-interpret a set of prompts deterministically. When exploring, it can still branch non-deterministically. But once you've found a solution that works, you want the degrees of freedom limited.

> You'll never get this with an LLM, because its very premise is using natural language, which is ambiguous.

That's the point of locking the model. You need the prompts and the interpreter.
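Roughly what I mean, as a sketch (the `complete` callable, the model snapshot name, and the seed parameter are assumptions for illustration, not any particular vendor's API): pin the model snapshot, zero the temperature, fix the seed, and fingerprint the transcript so a later replay can be checked for drift.

    import hashlib, json

    def replay(prompts, complete, model="model-2024-06-01", seed=42):
        # `complete` is a hypothetical inference call taking a pinned
        # model snapshot, a prompt, a temperature, and a seed.
        outputs = [complete(model=model, prompt=p, temperature=0.0, seed=seed)
                   for p in prompts]
        # Hash the whole locked transcript; re-running `replay` later
        # should reproduce this digest bit-for-bit if nothing drifted.
        digest = hashlib.sha256(json.dumps(
            {"model": model, "seed": seed,
             "prompts": prompts, "outputs": outputs},
            sort_keys=True).encode()).hexdigest()
        return outputs, digest

The digest is what you'd audit: same prompts plus same locked interpreter should mean same fingerprint.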