qwertox 7 hours ago

> If LLMs give only one answer, no matter what nuances are at play, that sounds like they are failing to judge and instead are diminishing the thought process down to black-and-white thinking.

You can have a team of agents exchange views, and maybe the protocol would even allow for settling cases automatically. The more agents you have, the more nuance you capture.
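
Roughly something like this sketch. Everything in it is hypothetical: the agent roles, the debate loop, and the majority-vote settlement are stand-ins for illustration, not any particular framework, and call_llm() is a placeholder for whatever model API is actually used.

    # Hypothetical sketch of a multi-agent debate with an automatic settlement step.
    from collections import Counter

    def call_llm(system_prompt: str, user_prompt: str) -> str:
        """Placeholder for a real model call."""
        raise NotImplementedError

    AGENT_ROLES = [
        "Argue strictly from the letter of the rules.",
        "Weigh intent and mitigating circumstances.",
        "Look for precedents that contradict the other agents.",
    ]

    def debate(case: str, rounds: int = 2) -> str:
        opinions = [call_llm(role, case) for role in AGENT_ROLES]
        for _ in range(rounds):
            # Each agent sees the others' current opinions and may revise its own.
            opinions = [
                call_llm(role, f"Case: {case}\nOther opinions: {opinions}")
                for role in AGENT_ROLES
            ]
        # "Settling automatically": here, a naive majority vote over verdict lines.
        verdicts = [op.splitlines()[0] for op in opinions]
        return Counter(verdicts).most_common(1)[0][0]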

jagged-chisel 7 hours ago | parent | next

Presumably all these agents would have been trained on different data, with different viewpoints? Otherwise, what makes them different enough from each other that such a "conversation" would matter?

qwertox 7 hours ago | parent

Different skills or plugins, different views, and different tools for analyzing the same object. Then the debate starts.
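
As a sketch of that idea: the agents could share the same base model and still diverge, because each one reasons from a different viewpoint and queries different tools. The Agent class and the placeholder tools below are made up for illustration.

    # Hypothetical: same base model, but different viewpoints and tools per agent,
    # so their analyses of the same object can genuinely differ.
    from dataclasses import dataclass, field
    from typing import Callable

    def statute_lookup(query: str) -> str: ...    # placeholder tool
    def precedent_search(query: str) -> str: ...  # placeholder tool

    @dataclass
    class Agent:
        name: str
        viewpoint: str                            # system prompt shaping its "view"
        tools: dict[str, Callable] = field(default_factory=dict)

    agents = [
        Agent("textualist", "Interpret the text literally.",
              {"statutes": statute_lookup}),
        Agent("contextualist", "Weigh how comparable cases were handled.",
              {"precedents": precedent_search}),
    ]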

viraptor 6 hours ago | parent | prev

Then you'd need to provide them with access to the law, previous cases, the news, and various other data sources. And you'd have to decide how much each of those sources matters. And at that point, in practice you've got people making the decision again instead of the AI.
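
Even a trivial weighting scheme already encodes a human decision; the numbers below are made up, and picking them is exactly where human judgment re-enters the loop.

    # Hypothetical source weights someone still has to choose.
    SOURCE_WEIGHTS = {
        "statute_text":  0.5,
        "case_law":      0.3,
        "news_coverage": 0.1,
        "other_sources": 0.1,
    }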

And then there's the question of which model is used. Turns out I've got preferences for which model I'd rather be judged by, and it's not Grok, for example...