Der_Einzige 9 hours ago

To be fair, there is "real harm" from constraining LLM outputs in some cases: for example, forcing a lipogram that omits the letter "E" can make a model respond with misspellings (words with the "E" deleted) rather than words that genuinely contain no "E" at all. This is why some authors propose special decoders to fix that diversity problem. See this paper and much of what it cites for examples: https://arxiv.org/abs/2410.01103
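To make that failure mode concrete, here is a minimal sketch. The word-level vocabulary and scores are invented (real constrained decoders mask logits over a tokenizer's subword vocabulary), but the mechanism is the same: a hard mask can promote a misspelling over a genuinely constraint-satisfying word.

```python
# Made-up model preferences: the misspelling "lttr" inherits much of the
# probability mass of "letter", so it outscores the truly E-free "symbol".
scores = {"letter": 4.0, "lttr": 2.5, "symbol": 2.0}

# Hard lipogram constraint: ban any candidate containing the letter "e".
allowed = {w: s for w, s in scores.items() if "e" not in w}

# Greedy pick under the mask lands on the deleted-E misspelling,
# not on a real word that avoids "E".
pick = max(allowed, key=allowed.get)
print(pick)  # "lttr"
```

The point is that masking only filters candidates; it does nothing to redistribute probability toward fluent constraint-satisfying text, which is what the special decoders in the paper above try to fix.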

This is independent of any "quality" or "reasoning" problem, which simply does not occur when using structured generation.

Edit (to respond):

I am claiming that there is no harm to reasoning, not that CoT reasoning before structured generation isn't happening.

crystal_revenge 8 hours ago | parent [-]

> "reasoning" problem which simply does not exist/happen when using structured generation

The first article demonstrates exactly how to implement structured generation with CoT. Do you mean “reasoning” other than traditional CoT (like DeepSeek-style reasoning)? I’ll have to look for a reference, but I recall the Outlines team also handling that latter case.
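For anyone unfamiliar with the pattern: the idea is a two-phase decode, where the chain-of-thought is generated unconstrained and only the final answer is constrained. A minimal sketch, with an invented toy stand-in for the model (libraries like Outlines do this over real logits):

```python
import re

def toy_model(prefix, candidates):
    """Invented stand-in for an LM: just picks the longest candidate."""
    return max(candidates, key=len)

def generate(question):
    # Phase 1: free-form reasoning, no constraint applied to the text.
    cot_candidates = [
        "Paris is the capital of France, so the answer is affirmative.",
        "Thinking...",
    ]
    reasoning = toy_model(question, cot_candidates)

    # Phase 2: the final answer must match a strict pattern, so invalid
    # candidates are filtered out before the model picks one.
    answer_pattern = r"yes|no"
    answer_candidates = ["yes", "no", "maybe", "Yes!"]
    allowed = [a for a in answer_candidates if re.fullmatch(answer_pattern, a)]
    return reasoning, toy_model(reasoning, allowed)

reasoning, answer = generate("Is Paris the capital of France?")
print(answer)  # "yes" — constrained; the reasoning text was not
```

The constraint only ever touches phase 2, which is why structured generation on the answer doesn't preclude CoT.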