cnnlives78 3 days ago
The LLMs most of us are using inject some randomness into every token selection, which makes their output non-deterministic. You can try to corral that, but statistically, with enough iterations, they can produce nonsensical, unintended, dangerous, or even opposite solutions/answers/actions, even if you have system instructions saying otherwise and a chain of LLMs checking each other. Be sure that you fully understand this. And even if you could make decoding fully deterministic, it would only be deterministic for a given model and state, and you'll surely be updating those. It amazes me how little people know about what they're using.
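To make the point concrete, here is a minimal sketch of how sampled decoding works: logits get turned into probabilities and a random draw picks the token. The logits and vocabulary indices here are made up for illustration; real models work over tens of thousands of tokens, but the mechanism is the same. Greedy decoding (temperature 0) is repeatable for a fixed model and input, while any temperature above 0 makes each call a gamble.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Pick a token index from raw logits (toy example, not a real model)."""
    if temperature == 0:
        # Greedy decoding: deterministic for a fixed model and input.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature, then a random draw: each call can differ.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.1]  # hypothetical scores for a 3-token vocabulary
print(sample_token(logits, temperature=0))  # always index 0 (highest logit)
# With temperature > 0, repeated calls typically hit more than one index:
print({sample_token(logits, temperature=1.0) for _ in range(200)})
```

Note that even the greedy path is only deterministic relative to the model weights and the input state; change either and the "deterministic" answer changes too, which is the second half of the point above.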