
Cooperation in public goods games: Leveraging other-regarding reinforcement learning on hypergraphs

Bo-Ying Li, Zhenyu Zhang, Guozhong Zheng et al.

Physical Review E, Journal Year: 2025, Volume and Issue: 111(1)

Published: Jan. 9, 2025

Cooperation is a self-organized collective behavior. It plays a significant role in the evolution of both ecosystems and human society. Reinforcement learning is different from imitation learning, offering a new perspective for exploring cooperation mechanisms. However, most existing studies with the public goods game (PGG) employ a self-regarding setup or are carried out on pairwise interaction networks. Players in the real world, however, optimize their policies based not only on their own histories but also on those of their coplayers, and games are played in a group manner. In this work, we investigate the PGG under other-regarding reinforcement learning in an evolutionary hypergraph framework, combining the Q-learning algorithm with evolutionary game dynamics, where other players' action histories are incorporated and games are played on hypergraphs. Our results show that as the synergy factor r̂ increases, the parameter interval divides into three distinct regions (the absence of cooperation, medium cooperation, and high cooperation), accompanied by two abrupt transitions in the cooperation level near r̂_1^* and r̂_2^*, respectively. Interestingly, we identify regular and anticoordinated chessboard structures in the spatial pattern that positively contribute to the first transition but adversely affect the second. Furthermore, we provide a theoretical treatment for an approximated model and reveal that players with long-sightedness and a low exploration rate are more likely to reciprocate kindness with each other, thus facilitating the emergence of cooperation. Our findings help in understanding the evolution of cooperation where other-regarding information and group interactions are commonplace.

Language: English

Citations: 0

Reinforcement learning in spatial public goods games with environmental feedbacks

Shaojie Lv, Jiaying Li, Changheng Zhao et al.

Chaos, Solitons & Fractals, Journal Year: 2025, Volume and Issue: 195, P. 116296

Published: March 23, 2025

Language: English

Citations: 0

Catalytic evolution of cooperation in a population with behavioral bimodality

Anhui Sheng, Jing Zhang, Guozhong Zheng et al.

Chaos: An Interdisciplinary Journal of Nonlinear Science, Journal Year: 2024, Volume and Issue: 34(10)

Published: Oct. 1, 2024

The remarkable adaptability of humans in response to complex environments is often demonstrated by the context-dependent adoption of different behavioral modes. However, existing game-theoretic studies mostly focus on a single-mode assumption, and the impact of this behavioral multimodality on the evolution of cooperation remains largely unknown. Here, we study how cooperation evolves in a population with two behavioral modes. Specifically, we incorporate Q-learning and Tit-for-Tat (TFT) rules into our toy model and investigate the impact of the mode mixture on the evolution of cooperation. While players in the Q-learning mode aim to maximize their accumulated payoffs, players within the TFT mode repeat what their neighbors have done to them. In a structured mixing implementation, where the updating rule is fixed for each individual, we find that the mode mixture greatly promotes the overall cooperation prevalence. The promotion is even more significant in probabilistic mixing, where players randomly select one of the two rules at each step. Finally, the promotion is robust when players adaptively choose between the two modes by real-time comparison. In all three scenarios, TFT players act as catalyzers that turn the Q-learning players cooperative and, as a result, drive the whole population to be highly cooperative. An analysis of the Q-tables explains the underlying mechanism of this promotion, which captures the “psychological evolution” in players’ minds. Our study indicates that the variety of behavioral modes is non-negligible and could be crucial to clarify the emergence of cooperation in the real world.

Language: English

Citations: 1