▲ gertlabs 2 days ago
Self-organizing systems are an area of research to which I think LLMs will contribute immensely. But as of now, even the newest AI models are not particularly insightful here. I'm always surprised by how suboptimal near-frontier LLMs are at collaborating in some of the easier cooperative environments on my benchmarking and RL platform. For example, check out a replay of consensus grid here: https://gertlabs.com/spectate
▲ AntiUSAbah 2 days ago | parent | next [-]
While interesting, it's not clear to me from just looking at consensus grid how the agents are prompted. Do you tell them to think about and coordinate the next step through some kind of sync/talking mechanism, or is it turn by turn? I suspect turn by turn, since that's similar to other experiments, and in that case it wouldn't work well because the agents wouldn't have any time to think about the next step together.
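To make the contrast concrete, here's a minimal sketch of the two prompting schemes, with the model call stubbed out (`ask` is a hypothetical stand-in for an LLM API call; all names are illustrative, not anything from the actual platform):

```python
def ask(agent: str, prompt: str) -> str:
    # Stand-in for an LLM call; a real implementation would hit a model API.
    return f"{agent}:move"

def turn_by_turn(agents, state):
    # Scheme 1: each agent acts immediately on the current state,
    # with no joint deliberation before committing a move.
    for a in agents:
        state.append(ask(a, f"state={state}, choose your move"))
    return state

def with_discussion(agents, state, rounds=2):
    # Scheme 2: a sync/talking phase first -- agents exchange messages
    # for a few rounds, then all commit moves based on the transcript.
    transcript = []
    for _ in range(rounds):
        for a in agents:
            transcript.append(ask(a, f"discuss the next step; so far: {transcript}"))
    for a in agents:
        state.append(ask(a, f"transcript={transcript}, commit your move"))
    return state
```

The difference is just whether a deliberation transcript exists before any move is committed; in the turn-by-turn scheme, later agents can only react to moves already made.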
▲ dataviz1000 a day ago | parent | prev [-]
Have you tried recursive self-reflective agents? The agent makes a copy of itself in /tmp/. Runs. Evaluates. Updates itself. Makes a copy of itself. Runs. Evaluates. Updates itself. Makes a ...... you get the idea. They will not stop if the recursion is given a hard-to-meet termination condition. Also, if the agent can cheat to satisfy the termination condition, it will.
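The loop above can be sketched in a few lines, with the LLM rewrite step stubbed out (`improve` is a hypothetical stand-in for the model call; the goal check is deliberately naive to show the cheating failure mode):

```python
import tempfile
from pathlib import Path

MAX_DEPTH = 5  # hard termination condition; without it the recursion never stops

def evaluate(source: str) -> bool:
    # Naive goal check -- an agent allowed to edit `source` can simply
    # write the string "GOAL MET" into itself, i.e. cheat the condition.
    return "GOAL MET" in source

def improve(source: str, generation: int) -> str:
    # Stand-in for the hypothetical LLM rewrite step.
    return source + f"\n# generation {generation}"

def reflect(source: str, depth: int = 0) -> str:
    if depth >= MAX_DEPTH or evaluate(source):
        return source
    copy = Path(tempfile.mkdtemp()) / "agent.py"  # copy of itself in /tmp
    copy.write_text(source)
    # run + evaluate + update, then recurse on the updated copy
    return reflect(improve(copy.read_text(), depth), depth + 1)
```

Both failure modes live in this sketch: drop `MAX_DEPTH` and it recurses forever; let the rewrite step inject "GOAL MET" and it terminates without doing anything useful.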