| ▲ | Der_Einzige 9 hours ago |
| Yet another example of "comments that are only sort of true because high-temperature sampling isn't allowed". If you use LLMs at very high temperature with samplers that keep the output coherent (e.g. Min_p, or better ones like top-h or P-less decoding), then "regression to the mean" literally does not happen! |
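| For concreteness, here's a minimal sketch of min_p filtering, assuming PyTorch; the threshold value is illustrative, and implementations differ on whether the cutoff is computed before or after temperature scaling (here it's after). Even at very high temperature, anything far below the current top token gets zeroed out, which is what keeps the output coherent: |

```python
import torch

def sample_min_p(logits: torch.Tensor, temperature: float = 2.0, min_p: float = 0.1) -> int:
    """Sample one token: apply temperature, then drop every token whose
    probability falls below min_p times the probability of the top token."""
    probs = torch.softmax(logits / temperature, dim=-1)
    cutoff = min_p * probs.max()                      # cutoff scales with the top token
    probs = torch.where(probs >= cutoff, probs, torch.zeros_like(probs))
    probs = probs / probs.sum()                       # renormalize over the survivors
    return torch.multinomial(probs, num_samples=1).item()

logits = torch.randn(50_000)   # stand-in for a model's next-token logits
token_id = sample_min_p(logits)
```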
|
| ▲ | hnlmorg 7 hours ago | parent | next [-] |
| Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will. LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical likelihood. So raising the temperature is more likely to produce unexecutable pseudocode than a valid but more esoteric implementation of the problem. |
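| You can see the mechanism in a toy example, assuming PyTorch (the logits here are made up): |

```python
import torch

logits = torch.tensor([4.0, 2.0, 0.0, -2.0])   # made-up next-token logits
for t in (0.5, 1.0, 2.0):
    probs = torch.softmax(logits / t, dim=-1)
    print(f"T={t}:", [round(p, 3) for p in probs.tolist()])
# Higher T flattens the distribution, so low-probability (often invalid)
# continuations get sampled more and more often.
```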
| |
| ▲ | Terr_ 6 hours ago | parent | next [-] |
| To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story. |
| ▲ | balamatom 4 hours ago | parent [-] |
| So why is this "temperature" not on, like, a rotary encoder? So you can just, like, tweak it when it's working against your intent in either direction? |
| |
| ▲ | bob1029 5 hours ago | parent | prev [-] |
| High temperature seems fine for my coding uses on GPT5.2. Code that fails to execute or compile is my default expectation; that's why we feed compile and runtime errors back into the model after each proposal. I'd much rather have code that sometimes doesn't work than get stuck in infinite tool-calling loops. |
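| The loop is roughly this, as a hedged sketch; `llm_propose` is a hypothetical stand-in for whatever model call the harness actually makes: |

```python
import py_compile
import tempfile

def repair_loop(llm_propose, prompt: str, max_rounds: int = 5):
    """Sketch of a compile-feedback loop: regenerate until the code compiles."""
    code = llm_propose(prompt)
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            py_compile.compile(path, doraise=True)
            return code                      # compiles; hand off to tests next
        except py_compile.PyCompileError as err:
            # Feed the compiler error straight back into the next proposal.
            code = llm_propose(
                f"{prompt}\n\nPrevious attempt:\n{code}\n\nCompile error:\n{err}\n\nFix it."
            )
    return None                              # give up rather than loop forever
```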
|
|
| ▲ | adevilinyc 9 hours ago | parent | prev [-] |
| How do you configure LLM temperature in coding agents, e.g. opencode? |
| |
| ▲ | kabr 8 hours ago | parent | next [-] |
| Set it in your opencode.json: https://opencode.ai/docs/agents/#temperature |
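| Going by that page, it's set per agent; something like the following, where the agent name and value are just examples: |

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "build": {
      "temperature": 0.3
    }
  }
}
```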
| ▲ | Der_Einzige 8 hours ago | parent | prev [-] |
| You can't without hacking it! That's my point! The only places where you can easily set it are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc. Same problem with image generation (lack of support for different SDE solvers, the image-generation analogue of LLM sampling), but those have their own "coomer" tools, e.g. ComfyUI or Automatic1111. |
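| Via the API it's trivial, e.g. against an OpenAI-compatible server. Note that min_p is a server-side extension (vLLM and llama.cpp servers accept it; the stock OpenAI API does not), so the extra_body bit is an assumption about your backend, and the base_url and model name are placeholders: |

```python
from openai import OpenAI

# Any OpenAI-compatible endpoint; base_url and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    temperature=1.8,                # raw temperature, set directly
    extra_body={"min_p": 0.1},      # vLLM/llama.cpp extension, not core OpenAI
)
print(resp.choices[0].message.content)
```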
|