AdieuToLogic a day ago

Those papers are really interesting, thanks for sharing them!

Do you happen to know of any research papers which explore constraint programming techniques wrt LLM prompts?

For example:

  Create a chicken noodle soup recipe.

  The recipe must satisfy all of the following:

    - must not use more than 10 ingredients
    - must take less than 30 minutes to prepare
    - ...
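
A minimal sketch of what I mean, in Python (purely illustrative, not taken from any paper): keep the constraints as data, so the same list can be rendered into the prompt and reused later to check the output.

  # Illustrative only: constraints kept as data, rendered into a prompt.
  constraints = [
      "must not use more than 10 ingredients",
      "must take less than 30 minutes to prepare",
  ]

  def build_prompt(task, constraints):
      # Assemble the task plus its constraint list as prompt text.
      lines = [task, "", "The recipe must satisfy all of the following:", ""]
      lines += ["  - " + c for c in constraints]
      return "\n".join(lines)

  print(build_prompt("Create a chicken noodle soup recipe.", constraints))
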
aix1 19 hours ago | parent | next

This is an area I'm very interested in. Do you have a particular application in mind? (I'm guessing the recipe example is just to illustrate the general principle.)

AdieuToLogic 15 minutes ago | parent

> This is an area I'm very interested in. Do you have a particular application in mind? (I'm guessing the recipe example is just to illustrate the general principle.)

You are right: the recipe example is illustrative and intentionally simple. A more realistic example of using constraint programming techniques with LLMs is:

  # Role
  You are an expert Unix shell programmer who comments their code and organizes their code using shell programming best practices.

  # Task
  Create a bash shell script which reads Markdown-formatted text from standard input and prints all embedded hyperlink URLs.

  The script requirements are:

    - MUST exclude all inline code elements
    - MUST exclude all fenced code blocks
    - MUST print all hyperlink URLs
    - MUST NOT print hyperlink labels
    - MUST NOT use Perl compatible regular expressions
    - MUST NOT use double quotes within comments
    - MUST NOT use single quotes within comments
  
In this exploration, the list of "MUST/MUST NOT" constraints was discovered iteratively (over 4 iterations), and at least the last three are reusable when the task involves generating shell scripts.
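
As an aside, the last few constraints are also easy to check mechanically. A rough Python sketch (my own, purely illustrative; the patterns are assumptions about what counts as a violation, not a real linter):

  # Illustrative only: flag a few of the MUST NOT constraints above in a
  # generated script read from standard input.
  import re
  import sys

  def check_script(text):
      problems = []
      for lineno, line in enumerate(text.splitlines(), 1):
          # Rough heuristic for Perl compatible regular expressions via grep.
          if re.search(r"\bgrep\b.*(\s-\w*P\b|--perl-regexp)", line):
              problems.append("line %d: grep with PCRE option" % lineno)
          # Crude comment extraction: everything after the first '#'.
          comment = line.partition("#")[2]
          if '"' in comment:
              problems.append("line %d: double quote inside a comment" % lineno)
          if "'" in comment:
              problems.append("line %d: single quote inside a comment" % lineno)
      return problems

  if __name__ == "__main__":
      for problem in check_script(sys.stdin.read()):
          print(problem)
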

This approach originated in an attempt to limit LLM token-generation variance by minimizing English vocabulary and expressive sentence structure, so that document generation has a higher probability of being repeatable. The epiphany for me was that once you treat an LLM as a "black box" whose results can only be influenced, rather than anthropomorphizing it, the natural way to influence it is to leverage its NLP capabilities to produce restrictions (search tree pruning) for a declarative query (the initial search space).

Aeolun 10 hours ago | parent | prev | next

Anything involving numbers, or conditions like ‘less than 30 minutes’, is going to be really hard.
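
One workaround (a rough sketch of my own, assuming the prompt also asks the model to return a machine-readable summary alongside the recipe) is to verify the numbers outside the model and regenerate on failure:

  # Illustrative only: external verification of numeric constraints.
  # Assumes the model is asked to emit a JSON summary such as
  # {"ingredients": [...], "prep_minutes": 25} next to the recipe text.
  import json

  def satisfies_constraints(summary_json):
      summary = json.loads(summary_json)
      return len(summary["ingredients"]) <= 10 and summary["prep_minutes"] < 30

  # A caller would regenerate (or ask for a repair) until this returns True,
  # rather than trusting the model to count or do arithmetic reliably.
  print(satisfies_constraints(
      '{"ingredients": ["chicken", "noodles", "broth"], "prep_minutes": 25}'))
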

cess11 19 hours ago | parent | prev | next

I suspect LLM-like technologies will only rarely back out of contradictory or otherwise unsatisfiable constraints, so it might require intermediate steps where LLMs formalise the problem in some SAT, SMT or Prolog tool and report back about it.
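
For example, a formalisation step could hand the numeric constraints to an SMT solver and report unsatisfiability instead of generating anything. A rough sketch with the Z3 Python bindings (purely illustrative, using the recipe constraints from upthread plus one deliberately contradictory constraint):

  # Illustrative only: formalise the numeric recipe constraints with Z3 and
  # let the solver report satisfiability before any text generation happens.
  from z3 import Int, Solver, sat

  ingredients = Int("ingredients")
  prep_minutes = Int("prep_minutes")

  s = Solver()
  s.add(ingredients >= 1, ingredients <= 10)   # no more than 10 ingredients
  s.add(prep_minutes >= 1, prep_minutes < 30)  # under 30 minutes to prepare
  s.add(ingredients >= 12)                     # contradicts the first bound

  if s.check() == sat:
      print("satisfiable, e.g.:", s.model())
  else:
      print("unsatisfiable: report back rather than generating a recipe")
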

llmslave2 a day ago | parent | prev

I've seen some interesting work going the other way: having LLMs generate constraint solvers (or whatever the right term is) in Prolog and then feeding input to that. I can't remember the link, but it could be worthwhile to search for it.
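
The plumbing for that pattern can be quite small. A rough sketch (the Prolog text here just stands in for whatever the LLM would emit, and it assumes SWI-Prolog's swipl and its clpfd library are installed):

  # Illustrative only: "LLM writes a Prolog constraint program, we run it".
  # The generated_solver string stands in for LLM output.
  import subprocess
  import tempfile

  generated_solver = """
  :- use_module(library(clpfd)).

  % recipe(-Ingredients, -Minutes): a trivial model of the two constraints.
  recipe(Ingredients, Minutes) :-
      Ingredients in 1..10,
      Minutes in 1..29,
      label([Ingredients, Minutes]).
  """

  with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
      f.write(generated_solver)
      path = f.name

  # Load the generated program and query it with swipl.
  result = subprocess.run(
      ["swipl", "-q", "-g", "recipe(I, M), format('~w ~w~n', [I, M])",
       "-t", "halt", path],
      capture_output=True, text=True)
  print(result.stdout.strip())
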