hawest | 4 days ago
Super interesting, thank you for sharing! I have published some research on using LLMs for mediation here: https://arxiv.org/abs/2307.16732 and https://arxiv.org/abs/2410.07053 These papers describe the LLMediator, a platform that uses LLMs to: a) ensure a discussion maintains a positive tone by flagging messages that may derail the conversation and offering reformulated versions of them, and b) suggest intervention messages that the mediator can use to step into the discussion and guide the parties toward a positive outcome. Overall, LLMs seem to be very good at these tasks, and they even compared favourably to human-written interventions. Very excited about LLMs lowering the barrier to mediation, as it has a lot of potential to resolve disputes in a positive and collaborative manner.
sanity | 4 days ago | parent
Thank you for sharing these. This feels complementary to my approach. Your papers seem focused on tone, interventions, and guiding the conversation. My approach is more about trying to infer each party's preferences and then search for agreements that both would accept. I think LLMs are strong at both layers, but they're quite different problems: one is helping people communicate better, the other is trying to actually compute outcomes given what each side cares about.
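To make the second layer concrete, here is a minimal sketch of that idea (my own toy illustration, not code from either paper): each party's inferred preferences are reduced to per-issue weights — hard-coded here, though in practice an LLM would estimate them from the conversation — and we search candidate agreements for one that clears both parties' acceptance threshold while maximizing the worse-off party's score.

```python
def utility(weights, agreement):
    """Weighted score of how well an agreement satisfies each issue (values in 0-1)."""
    return sum(weights[issue] * value for issue, value in agreement.items())

def find_agreement(candidates, weights_a, weights_b, threshold=0.5):
    """Return the candidate maximizing min(utility_a, utility_b),
    provided that minimum clears the acceptance threshold; else None."""
    best, best_score = None, threshold
    for agreement in candidates:
        score = min(utility(weights_a, agreement), utility(weights_b, agreement))
        if score >= best_score:
            best, best_score = agreement, score
    return best

# Hypothetical two-issue dispute: refund size vs. public apology.
weights_a = {"refund": 0.8, "apology": 0.2}   # party A cares mostly about money
weights_b = {"refund": 0.3, "apology": 0.7}   # party B cares about the apology
candidates = [
    {"refund": 1.0, "apology": 0.0},
    {"refund": 0.6, "apology": 0.8},
    {"refund": 0.0, "apology": 1.0},
]
print(find_agreement(candidates, weights_a, weights_b))
# → {'refund': 0.6, 'apology': 0.8}  (the only candidate both sides accept)
```

Maximizing the minimum utility is one of several reasonable objectives here; a weighted Nash product or iterated concession search would fit the same skeleton.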
harvey9 | 4 days ago | parent
Too many chatbots maintain a relentlessly 'positive tone' anyway, and sometimes a negative situation calls for an honestly negative tone.