Long homogeneous payoff records with the latest strategy promotes the cooperation
Applied Mathematics and Computation (IF 4) Pub Date: 2024-05-02, DOI: 10.1016/j.amc.2024.128786
Fei Mo , Wenchen Han

In this study, we investigate the fraction of cooperators in the public goods game, taking into account a memory effect that influences strategy updating. Unlike previous studies, in which an agent learned an opponent's last strategy based on their last payoffs, agents with memory here decide whether to cooperate according to opponents' effective strategies, obtained by comparing effective payoffs computed from the payoffs and strategies stored in memory. The effective payoff of an agent is a weighted average of the previous payoffs in the agent's memory, where the weights are decay factors measuring the significance of past records: earlier payoffs count less toward the future strategy than more recent ones. The effective strategy is defined in the same way. When the effective payoff and the effective strategy share the same memory length and the same set of decay weights, numerical simulations show that increasing the memory length or making the decay homogeneous promotes cooperation among agents. Surprisingly, however, the effective payoff and the effective strategy have opposite effects when varied separately: homogeneous payoff weights lead to a higher fraction of cooperators, whereas heterogeneous strategy weights favor cooperation, especially when agents consider only the latest strategy. Comparing the two, memorizing payoffs plays the dominant role. Furthermore, when the total memory length is limited, agents should memorize as many historical payoffs as possible. These qualitative results are independent of the rational noise.
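The decay-weighted average underlying the effective payoff (and, analogously, the effective strategy) can be sketched as follows. The paper's exact weighting scheme is not reproduced here; this hypothetical example assumes an exponential decay in which a record from k steps ago carries weight decay**k, so decay = 1 gives homogeneous weights and small decay makes the latest record dominate.

```python
def effective_value(history, decay):
    """Decay-weighted average over a memory of records.

    history: list of past values (payoffs or strategies), oldest first.
    decay:   factor in (0, 1]; a record from k steps ago gets weight decay**k
             (assumed form, for illustration only).
    """
    n = len(history)
    # Newest record (last element) gets weight 1; older ones decay geometrically.
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * v for w, v in zip(weights, history)) / sum(weights)

# Example: a memory of three payoffs, newest last.
payoffs = [1.0, 2.0, 4.0]
print(effective_value(payoffs, 1.0))  # homogeneous weights: plain mean, ~2.333
print(effective_value(payoffs, 0.5))  # heterogeneous weights: newest dominates, 3.0
```

In this sketch, decay close to 0 approximates the "only the latest strategy" limit discussed in the abstract, while decay = 1 recovers the homogeneous-weight case.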

Updated: 2024-05-02