2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT)
DOI: 10.1109/wi-iat.2014.171
Emergence of Conventions for Efficiently Resolving Conflicts in Complex Networks

Abstract: We investigated the emergence of conventions for conflict resolution in agent networks with various structures through pairwise reinforcement learning. Although coordinating agents inevitably encounter conflict situations in the course of their actions, resolving these conflicts is complex and computationally expensive, owing to the mutual analysis of subsequent actions by both agents and to the communication costs of the interactions. Norms and conventions are expected to reduce these costs by regulating agent actions in recurrent conflicts. This …
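The abstract describes conventions emerging from pairwise reinforcement learning over a network. The sketch below illustrates that general setup, not the paper's actual model: the two-action coordination payoff, the epsilon-greedy policy, the parameter values, and the placeholder topology are all assumptions made for illustration. Stateless Q-learners repeatedly paired along network edges tend to converge on a shared action:

```python
import random
from collections import defaultdict

ACTIONS = ("L", "R")          # e.g., which side to yield on in a conflict
ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate (assumed)

# One stateless Q-table per agent, created on first use.
q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose(agent):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q[agent], key=q[agent].get)

def interact(i, j):
    """One pairwise interaction: matching actions resolve the conflict."""
    ai, aj = choose(i), choose(j)
    r = 1.0 if ai == aj else -1.0
    q[i][ai] += ALPHA * (r - q[i][ai])
    q[j][aj] += ALPHA * (r - q[j][aj])

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # placeholder ring topology
for _ in range(20_000):
    interact(*random.choice(edges))

# After repeated play, all agents typically favour the same action:
print({agent: max(q[agent], key=q[agent].get) for agent in list(q)})
```

Once most of an agent's interaction partners favour one action, deviating earns negative payoff, so the shared choice stabilizes into a convention.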

Cited by 5 publications (4 citation statements); references 9 publications.
“…Clearly, the scale-free network is the most efficient network for achieving a high level of consensus compared with the other two networks. Previous studies have shown that this effect is due to the small graph diameter of scale-free networks [11], [39], [40], [41].…”
Section: Results
confidence: 97%
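The cited explanation turns on graph diameter. The snippet below is a hedged illustration rather than anything from the cited studies; it uses networkx (sizes and generator parameters are arbitrary choices) to show that a Barabási–Albert scale-free graph typically has a smaller diameter than comparable small-world or random regular graphs:

```python
import networkx as nx

n = 1000  # arbitrary network size for illustration
graphs = {
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(n, m=2, seed=0),
    "small-world (Watts-Strogatz)": nx.connected_watts_strogatz_graph(n, k=4, p=0.1, seed=0),
    "random 4-regular": nx.random_regular_graph(d=4, n=n, seed=0),
}
for name, g in graphs.items():
    # nx.diameter requires a connected graph; these generators yield
    # connected graphs here (the WS variant retries until connected, and
    # a random 4-regular graph on 1000 nodes is connected with
    # overwhelming probability).
    print(f"{name}: diameter = {nx.diameter(g)}")
```

A smaller diameter means fewer hops between any two agents, so locally learned choices can propagate network-wide more quickly.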
“…operational constraints. This happens in contrast to other approaches where agents learn by iteratively interacting with a single opponent from the population [32], [28], or by playing repeatedly with randomly chosen neighbours [4].…”
Section: Related Work
confidence: 97%
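For concreteness, the two interaction regimes contrasted in this excerpt can be sketched as pairing schemes; the adjacency map and function names below are hypothetical, introduced only to make the distinction explicit:

```python
import random

# Hypothetical adjacency mapping for a small network.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def fixed_opponent_pairs(agent, opponent, rounds):
    """Iterated learning against a single fixed opponent ([32], [28])."""
    return [(agent, opponent) for _ in range(rounds)]

def random_neighbour_pairs(agent, rounds):
    """Repeated play with a randomly chosen neighbour each round ([4])."""
    return [(agent, random.choice(neighbors[agent])) for _ in range(rounds)]
```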
“…RL is suitably used for learning through trial and error: mapping situations to the actions that gain the most reward (exploration) and executing actions to maximize a numerical reward signal (exploitation) (Sutton & Barto, 2018). Within the context of NorMAS, RL is a method to invoke convention emergence or norm emergence (Frantz et al., 2014, 2015; Hosseini & Ulieru, 2012; Mashayekhi et al., 2022; Neufeld et al., 2021; Pujol et al., 2005; Riveret et al., 2014a, 2014b; Sen & Airiau, 2007; Shoham & Tennenholtz, 1992, 1997; Sugawara, 2014; Yu et al., 2013, 2014, 2015, 2017). The current de facto standard algorithm used in past studies to induce norm emergence using RL is QL (Sutton & Barto, 2018; Watkins & Dayan, 1992), a model-free RL algorithm, applied in NorMAS through social learning (learning from interactions with other agents).…”
Section: Norm Emergence
confidence: 99%
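Since the excerpt names tabular Q-learning (QL) as the de facto standard algorithm, here is a minimal sketch of its update rule in the generic textbook form, not any cited paper's specific implementation; the state encoding and parameter values are assumptions:

```python
from collections import defaultdict

# Tabular Q-learning update (Watkins & Dayan, 1992):
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_b Q(s', b) - Q(s, a))
Q = defaultdict(float)  # maps (state, action) pairs to value estimates

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One learning step after taking action a in state s, receiving
    reward r, and observing next state s_next."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```

In social-learning settings for norm emergence, r is typically the payoff from an interaction with another agent, and stateless variants (gamma = 0) are common.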