ISSN 1000-9825, CODEN RUXUEW — E-mail: jos@iscas.ac.cn
Journal of Software, Vol.19, No.11, November 2008, pp.2957−2967 — http://www.jos.org.cn
DOI: 10.3724/SP.J.1001.2008.02957 — Tel/Fax: +86-10-62562563
© 2008 by Journal of Software. All rights reserved.

Reinforcement Learning Model Based on Regret for Multi-Agent Conflict Games

XIAO Zheng, ZHANG Shi-Yong (Department of Computer and Information Technology, Fudan University, Shanghai 200433, China)

Corresponding author: E-mail: xiaozheng206@163.com, http://www.fudan.edu.cn

Xiao Z, Zhang SY. Reinforcement learning model based on regret for multi-agent conflict games. Journal of Software, 2008,19(11):2957−2967. http://www.jos.org.cn/1000-9825/19/2957.htm

Abstract: For conflict games, a rational but conservative action-selection method is investigated, namely, minimizing the regret function in the worst case. Under this method, the loss the current policy may incur in the future is the lowest, and a Nash-equilibrium mixed policy is obtained without any information about the other agents. Based on regret, a reinforcement learning model and its algorithm for conflict games in complex multi-agent environments are put forward. The model also builds the agents' belief-updating process on the concept of cross-entropy distance, which further optimizes the action-selection policy for conflict games. Based on the Markov repeated-game model, this paper demonstrates the convergence of the algorithm and analyzes the relationship between belief and optimal policy. Additionally, compared with the extended Q-learning algorithm under the MMDP (multi-agent Markov decision process) model, the proposed algorithm dramatically decreases the number of conflicts, enhances coordination among agents, improves system performance, and helps to maintain system stability.

Keywords: Markov game; reinforcement learning; conflict game; conflict resolving
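The abstract's central idea — picking the mixed policy that minimizes worst-case regret, with no model of the other agents — can be sketched for a 2×2 matrix game. This is a minimal illustration, not the paper's actual algorithm: the payoff matrix, the grid search, and all function names here are our own assumptions.

```python
def worst_case_regret(p, U):
    """Worst-case regret of mixed policy p, where U[a][o] is our payoff
    for own action a against opponent action o."""
    worst = float("-inf")
    for o in range(len(U[0])):
        best = max(U[a][o] for a in range(len(U)))          # best response vs. o
        expected = sum(p[a] * U[a][o] for a in range(len(U)))
        worst = max(worst, best - expected)                  # regret vs. action o
    return worst

def minimax_regret_policy(U, grid=1001):
    """Grid-search the 2-action mixed policy minimizing worst-case regret."""
    best_p, best_r = None, float("inf")
    for i in range(grid):
        q = i / (grid - 1)
        p = [q, 1.0 - q]
        r = worst_case_regret(p, U)
        if r < best_r:
            best_p, best_r = p, r
    return best_p, best_r

# Matching pennies: the minimax-regret policy coincides with the Nash
# equilibrium mixed policy (0.5, 0.5), found without observing the opponent.
U = [[1.0, -1.0],
     [-1.0, 1.0]]
p, r = minimax_regret_policy(U)
```

In this zero-sum example the regrets against the two opponent actions are 2 − 2q and 2q, so they balance at q = 0.5 with worst-case regret 1 — consistent with the abstract's claim that the method recovers the Nash mixed policy without information about other agents.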
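The belief-updating idea — measuring how far observed behavior has drifted from an agent's current belief via cross-entropy (KL) distance — can likewise be sketched. The distance measure below is the standard KL divergence; the specific update rule and the `rate` parameter are illustrative assumptions, not the paper's formula.

```python
import math

def cross_entropy_distance(belief, observed):
    """KL divergence D(observed || belief): how far the empirical action
    frequencies of the other agent lie from our current belief."""
    return sum(o * math.log(o / b) for o, b in zip(observed, belief) if o > 0)

def update_belief(belief, observed, rate=0.3):
    """Illustrative update: shift the belief toward the observed
    frequencies, scaled by a learning rate."""
    return [(1 - rate) * b + rate * o for b, o in zip(belief, observed)]

belief = [0.5, 0.5]
observed = [0.8, 0.2]   # empirical frequencies of the other agent's actions
d = cross_entropy_distance(belief, observed)
belief = update_belief(belief, observed)
```

A zero distance means the belief already matches observed play; a large distance signals the belief (and hence the action-selection policy conditioned on it) needs revising, which is the role cross-entropy distance plays in the model described by the abstract.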