Sci Rep. 2025 Aug 11;15(1):29376. doi: 10.1038/s41598-025-15057-x.
ABSTRACT
This study investigates emergent behaviors in multi-agent pursuit-evasion games within a bounded 2D grid world, where both pursuers and evaders employ multi-agent reinforcement learning (MARL) algorithms to develop adaptive strategies. We define six fundamental pursuit actions (flank, engage, ambush, drive, chase, and intercept), which combine to form 21 types of composite actions during two-pursuer coordination. After training with MARL algorithms, pursuers achieved a 99.9% success rate in 1,000 randomized pursuit-evasion trials, demonstrating the effectiveness of the learned strategies. To systematically identify and measure emergent behaviors, we propose a K-means-based clustering methodology that analyzes the trajectory evolution of both pursuers and evaders. By treating the full set of game trajectories as statistical samples, this approach enables the detection of distinct behavioral patterns and cooperative strategies. Through this analysis, we uncover emergent behaviors such as lazy pursuit, in which one pursuer minimizes effort while complementing the other's actions, and serpentine movement, characterized by alternating drive and intercept actions. We identify four key cooperative pursuit strategies and statistically analyze their occurrence frequency and corresponding trajectory characteristics: serpentine corner encirclement, stepwise corner approach, same-side edge confinement, and pincer flank attack. These findings provide significant insights into the mechanisms of behavioral emergence and the optimization of cooperative strategies in multi-agent games.
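The count of 21 composite actions is consistent with pairing the six base actions without regard to pursuer order, allowing both pursuers to take the same action: C(6,2) + 6 = 21. A minimal sketch of this enumeration (action names taken from the abstract; the unordered-with-repetition pairing scheme is an assumption, not stated explicitly in the abstract):

```python
from itertools import combinations_with_replacement

# Six fundamental pursuit actions named in the abstract.
ACTIONS = ["flank", "engage", "ambush", "drive", "chase", "intercept"]

def composite_actions(actions):
    """Enumerate unordered pairs of per-pursuer actions, repetition allowed.

    Assumes a composite action is defined by which two actions the
    pursuers take, not by which pursuer takes which.
    """
    return list(combinations_with_replacement(actions, 2))

pairs = composite_actions(ACTIONS)
print(len(pairs))  # 21 = C(6,2) distinct pairs + 6 same-action pairs
```

Under this assumption, the "serpentine movement" pattern described above corresponds to alternating between the (drive, intercept) composite and its neighbors over time.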
PMID:40789927 | DOI:10.1038/s41598-025-15057-x