david3289 3 hours ago

This is a really interesting direction. RTS games are a much better testbed for agent capability than most static benchmarks because they combine partial observability, long-term planning, resource management, and real-time adaptation. It reminds me a bit of OpenAI Five — not just because it played a complex game, but because the real value wasn’t “AI plays Dota,” it was observing how coordination, strategy formation, and adaptation emerged under competitive pressure. A controlled RTS environment like this feels like a lightweight, reproducible version of that idea.

What I especially like here is that it lowers the barrier for experimentation. If researchers and hobbyists can plug different models into the same competitive sandbox, we might start seeing meaningful AI-vs-AI evaluations beyond static leaderboards. Competitive dynamics often expose weaknesses much faster than isolated benchmarks do.

Curious whether you’re planning to support self-play training loops, or if the focus is primarily on inference-time agents?
degenerate 2 hours ago

You would likely be interested in the StarCraft BWAPI: https://www.starcraftai.com

You can watch the match videos from training runs: https://www.youtube.com/@Sscaitournament/videos

I don't think BWAPI has ever integrated modern AI models, but I haven't followed its progress in several years.
__cayenne__ an hour ago
Very interested in self-play training loops, but I do like codegen as an abstraction layer. I am planning to make it available as an RL environment at some point.
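For anyone curious what "available as an RL environment" could look like in practice: here's a minimal sketch of a gym-style two-player loop around a toy RTS. This is not the project's actual API — every class, method, and rule here is a made-up assumption just to show the reset/step shape a self-play wrapper would need.

```python
# Hypothetical sketch only; TinyRTSEnv and its rules are invented for
# illustration and are NOT this project's real interface.
import random

class TinyRTSEnv:
    """Toy two-player resource race: first player to 10 resources wins."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.resources = [0, 0]
        self.done = False
        return tuple(self.resources)

    def step(self, actions):
        # actions: one action string per player, "gather" or "raid"
        for i, action in enumerate(actions):
            if action == "gather":
                self.resources[i] += 1
            elif action == "raid":
                # Risky: 50% chance to steal up to 2 from the opponent.
                if self.rng.random() < 0.5:
                    j = 1 - i
                    stolen = min(2, self.resources[j])
                    self.resources[j] -= stolen
                    self.resources[i] += stolen
        self.done = max(self.resources) >= 10
        rewards = [0, 0]
        if self.done:
            # Win reward goes to whoever has the most resources.
            rewards[self.resources.index(max(self.resources))] = 1
        return tuple(self.resources), rewards, self.done

def greedy_policy(obs, player):
    # Trivial stand-in for a learned (or LLM-generated) policy.
    return "gather"

# Self-play loop: two copies of the same policy compete.
env = TinyRTSEnv(seed=42)
obs = env.reset()
while not env.done:
    obs, rewards, done = env.step(
        [greedy_policy(obs, 0), greedy_policy(obs, 1)]
    )
```

The interesting design question for real self-play is what sits behind `greedy_policy`: a trained network acting per tick, or (as the codegen-as-abstraction idea suggests) a script the model wrote once, with the RL signal flowing back at the script level rather than the action level.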
dmos62 3 hours ago

> partial observability, long-term planning, resource management, and real-time adaptation

Note: this project doesn't have that, as best I can tell. It's two static AI scripts having a go. LLMs generate the scripts and are aware of past "results", but I'm not sure what that means.