carbocation a day ago

One thing that is confusing about this write-up is that "DeepConf-low" is mentioned only once, and only in a screenshot, yet it seems to outperform DeepConf-high on several tasks. I guess I'll need to read the underlying paper, but that seems troublesome.

swores a day ago | parent | next [-]

Copied from the paper (halfway down page 6: https://arxiv.org/pdf/2508.15260 )

> "Specifically, DeepConf-low uses top η= 10% (corresponding to the 90th percentile) and DeepConf-high uses top η = 90% (corresponding to the 10th percentile) uniformly across all settings. This threshold ensures that during online generation, traces are terminated when their confidence falls below the level that retains the top η% highest-confidence traces from the warmup phase."

I'm not sure if I'm parsing it right, but are they using "low" and "high" to describe the percentage value itself? So "low" (η = 10) keeps only the best 10% of traces, while "high" (η = 90) keeps the best 90% — i.e. "high" is actually less selective than "low"?
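If that reading is right, the warmup thresholding would look something like this (a sketch of my interpretation, not the paper's code; the confidence values and the exact percentile convention are made up for illustration):

```python
# Sketch of the eta-percent thresholding as I read it (not the paper's code).
# Warmup phase: suppose these are per-trace confidence scores.
warmup_confidences = [0.91, 0.85, 0.40, 0.77, 0.95, 0.60, 0.88, 0.70, 0.82, 0.55]

def stop_threshold(confs, eta_percent):
    """Return the confidence cutoff that keeps the top eta_percent% of traces.

    DeepConf-low:  eta = 10 -> cutoff at the 90th percentile (very selective).
    DeepConf-high: eta = 90 -> cutoff at the 10th percentile (permissive).
    During online generation, a trace is terminated once its confidence
    falls below this cutoff.
    """
    confs = sorted(confs)
    keep = max(1, round(len(confs) * eta_percent / 100))  # how many traces survive
    return confs[len(confs) - keep]  # lowest confidence among the survivors

low_thr = stop_threshold(warmup_confidences, 10)   # high bar: kills most traces
high_thr = stop_threshold(warmup_confidences, 90)  # low bar: kills few traces
print(low_thr, high_thr)
```

On this toy data the η = 10 cutoff lands at 0.95 (only the single best trace survives) and the η = 90 cutoff at 0.55, which matches the "low is more selective than high" reading.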

carbocation a day ago | parent [-]

Thanks, this is a helpful breakdown.

cubefox a day ago | parent | prev [-]

It's likely confusing because it was written by an LLM.

swores a day ago | parent [-]

The confusing thing mentioned by the person you replied to is the data and naming from the actual paper, so no, it has nothing to do with how the article was written. (Unless you're suggesting that the research paper was also written by an LLM, but I don't think you are?)

cubefox a day ago | parent [-]

> The confusing thing mentioned by the person you replied to is the data and naming from the actual paper

No I think the confusing thing is that the LLM-written blog post doesn't adequately explain the screenshot.