quotemstr 7 hours ago
Cool. I agree (consistent with your GOTO analogy) that imposing structure on the model (or a human) can constrain the search space and lead to better choices given a fixed decision budget.

> deterministic primitives

Are agentic_map and llm_map the only two options you've given the model for recursive invocations? No higher-level, er, reduction operators to augment the map primitives?
belisarius222 7 hours ago | parent
Hi, I'm the other author on this paper. You've asked a good question. I had originally planned to write an agentic_reduce operator to complement agentic_map, but the more I thought about it, the more I realized I couldn't come up with a use case for it that wasn't contrived. Instead, having the main agent write scripts that perform aggregations on the results of an agentic_map or llm_map call made a lot more sense.

It's quite possible that's wrong. If so, I would write llm_reduce like this: it would spawn a sub-task for every pair of elements in the list, each of which would call an LLM with a prompt telling it how to combine the two elements into one. The output type of the reduce operation would need to be the same as the input type, just as in a normal map/reduce. That allows the reduction to be performed as a tree of operations: each round halves the list, so after about log(n) rounds you're left with a single value. That value should probably be loaded into the LCM database by default, rather than placed directly into the model's context, to protect the invariant that the model can string together arbitrarily long sequences of maps and reduces without filling up its own context.

I don't think this would be hard to write. It would reuse the same database and parallelism machinery that llm_map and agentic_map use.
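For concreteness, here is a minimal sketch of that pairwise tree reduction. This is not code from the paper: llm_reduce and the combine callback are hypothetical names, and combine stands in for the LLM call (or spawned sub-task) that merges two elements into one value of the same type. Only the tree-shaped control flow is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def llm_reduce(items, combine):
    """Reduce a list to a single value by combining adjacent pairs
    in parallel, repeating for roughly log2(n) rounds.

    `combine(a, b)` is a stand-in for a sub-task that prompts an LLM
    to merge two elements into one; its output type must match its
    input type, as in a normal map/reduce.
    """
    if not items:
        raise ValueError("cannot reduce an empty list")
    while len(items) > 1:
        # Pair up adjacent elements; an odd element carries over unchanged.
        pairs = [(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
        leftover = [items[-1]] if len(items) % 2 else []
        # Each round's combinations are independent, so they can run
        # concurrently -- mirroring the parallelism machinery llm_map uses.
        with ThreadPoolExecutor() as pool:
            items = list(pool.map(lambda p: combine(*p), pairs)) + leftover
    return items[0]

# With string concatenation as the stand-in combiner, seven elements
# collapse in three rounds: 7 -> 4 -> 2 -> 1.
print(llm_reduce(list("abcdefg"), lambda a, b: a + b))  # -> abcdefg
```

In the real system the return value would presumably be written to the LCM database rather than returned into the model's context, per the invariant described above.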