AdieuToLogic 17 hours ago
> This is an area I'm very interested in. Do you have a particular application in mind? (I'm guessing the recipe example is just to illustrate the general principle.)

You are right: the recipe example is illustrative and intentionally simple. A more realistic example of using constraint programming techniques with LLMs is:
In this exploration, the list of "MUST/MUST NOT" constraints was discovered iteratively (over 4 iterations), and at least the last three are reusable whenever the task involves generating shell scripts.

This approach originated in an attempt to limit LLM token-generation variance by minimizing reliance on expressive English vocabulary and sentence structure, so that document generation has a higher probability of being repeatable. The epiphany I experienced was that by interacting with LLMs as a "black box" whose results can only be influenced, and not anthropomorphizing them, the natural way to influence them is to leverage their NLP capabilities: produce restrictions (search tree pruning) for a declarative query (the initial search space).
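Concretely, the shape of that "declarative query plus restrictions" prompt is roughly the following sketch. The task and constraint list here are illustrative stand-ins, not the actual ones from the exploration:

    # Sketch: a declarative task (initial search space) plus
    # MUST/MUST NOT constraints (search tree pruning) assembled into a prompt.
    # Task text and constraint list are illustrative only.

    TASK = "Generate a POSIX shell script that archives all *.log files older than 7 days."

    CONSTRAINTS = [
        "MUST use only POSIX sh features (no bashisms).",
        "MUST NOT use double quotes inside comments.",
        "MUST exit non-zero on the first failing command.",
        "MUST NOT prompt for interactive input.",
    ]

    def build_prompt(task: str, constraints: list[str]) -> str:
        lines = [task, "", "Constraints:"]
        lines += [f"- {c}" for c in constraints]
        return "\n".join(lines)

    if __name__ == "__main__":
        print(build_prompt(TASK, CONSTRAINTS))

The point is that the declarative task stays fixed while the constraint list is what gets iterated on, and the reusable constraints carry over to the next task of the same kind.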
aix1 13 hours ago
If one goal is to reduce the variance of output, couldn't this be done by controlling the decoding temperature?

Another related technique is constrained decoding, where the LLM's sampler only considers tokens allowed by a certain formal grammar. This could be applicable to your "quotes within comments" requirement.

Both techniques clearly require code or hyperparameter changes to the machinery that drives the LLM. What's missing is the ability to express these constraints in natural language, directly to the LLM, and have it comply.

The angle I was coming from was whether one could use a constraint satisfaction solver, but I don't see how that would help for your example.
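To make the first two concrete, here is a toy sketch of both knobs (not any particular library's API; a real constrained decoder derives the allowed-token set from the grammar at each step):

    # Toy sketch: a low temperature sharpens the distribution (less variance);
    # a grammar-derived mask removes tokens the formal grammar disallows at this step.
    import numpy as np

    def sample(logits: np.ndarray, allowed: np.ndarray, temperature: float) -> int:
        """Sample a token id from `logits`, restricted to the `allowed` ids."""
        masked = np.full_like(logits, -np.inf)
        masked[allowed] = logits[allowed]       # constrained decoding: prune disallowed tokens
        if temperature == 0.0:                  # temperature 0 ~ greedy, fully deterministic
            return int(np.argmax(masked))
        probs = np.exp((masked - masked.max()) / temperature)
        probs /= probs.sum()
        return int(np.random.choice(len(logits), p=probs))

    # Example: vocabulary of 5 tokens, grammar only allows ids 1 and 3 here.
    logits = np.array([2.0, 1.5, 0.3, 1.4, -1.0])
    print(sample(logits, allowed=np.array([1, 3]), temperature=0.0))  # -> 1

Neither knob is something you can currently request in plain English inside the prompt, which is the gap I was pointing at.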