insane_dreamer 2 hours ago

> I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context.

A human can be in the loop while the AI still makes all the decisions, as long as the human is simply executing the AI's orders. That's the point of the experiment: it isn't to see whether agents can handle every interaction necessary to run a business (pick up the phone and place orders, etc.). That's also why Luna hired humans.

bfeynman 2 hours ago

That is ... not correct? This is a classic example of data leakage: the yes/no responses are signals feeding back to the model, influencing (and here, basically guiding) its future decisions.

insane_dreamer an hour ago

It's not data leakage.

If the experiment is to see how the AI behaves on its own, then of course it needs to know the outcomes of its decisions (either automatically, or fed to it by a human), and those outcomes naturally influence its next decisions. This is giving the AI retained memory, which is essential to the experiment. It's similar to an AI writing code, running it, and parsing the logs to see the outcome and improve it. (The model is not _retrained_ on those outcomes, and neither is that the case here; it can only reference them in stored memory.)
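The distinction being drawn (outcomes stored in memory vs. baked into weights) can be sketched in a few lines. This is a hypothetical illustration of an agent loop, assuming a fake decision function in place of a real LLM call; none of the names come from the actual experiment:

```python
# Sketch: outcomes are appended to a memory log the agent can reference
# on its next decision, while the model's "weights" never change.
# All names here are illustrative, not Luna's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Agent:
    memory: list[str] = field(default_factory=list)  # retained context, not weights

    def decide(self, situation: str) -> str:
        # A real agent would send `situation` plus `self.memory` to an LLM;
        # here we fake the decision so the sketch is runnable.
        past_failures = [m for m in self.memory if "failed" in m]
        if past_failures:
            return f"avoid what {past_failures[-1]}"
        return "try initial strategy"

    def record(self, decision: str, outcome: str) -> None:
        # The human in the loop (or an automated check) reports the outcome;
        # it lands in memory, not in the model's parameters.
        self.memory.append(f"decision={decision!r} {outcome}")


agent = Agent()
d1 = agent.decide("place supplier order")
agent.record(d1, "failed: supplier out of stock")
d2 = agent.decide("place supplier order")  # now conditioned on the stored outcome
```

The human relaying outcomes here is a transport mechanism, not a decision-maker: the same loop works whether the outcome arrives via Slack or an automated log parser.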