2018
DOI: 10.1016/j.artint.2018.03.001

Arguing about informant credibility in open multi-agent systems

Abstract: This paper proposes the use of an argumentation framework with recursive attacks to address a trust model in a collaborative open multi-agent system. Our approach focuses on scenarios where agents share information about the credibility (informational trust) they have assigned to their peers. We represent informants' credibility through credibility objects, which include not only trust information but also the informant source. This leads to a recursive setting where the reliability of certain cred…
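The recursive setting the abstract describes — attacks that can themselves be attacked, because a credibility report depends on the reliability of its informant — can be illustrated with a minimal sketch. This is not the paper's own formalism; all class and function names are hypothetical, and the naive fixpoint semantics below assumes an acyclic framework.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Arg:
    name: str  # e.g. a credibility object: "informant B is reliable"

@dataclass(frozen=True)
class Att:
    name: str
    source: "Arg"
    target: Union["Arg", "Att"]  # recursive: the target may be another attack

def evaluate(attacks, max_rounds=100):
    """Return the set of defeated elements under a naive iterative
    semantics: an attack is effective while neither it nor its source
    is defeated; effective attacks defeat their targets. Assumes an
    acyclic framework so the iteration reaches a fixpoint."""
    defeated = set()
    for _ in range(max_rounds):
        effective = {a for a in attacks
                     if a not in defeated and a.source not in defeated}
        new = {a.target for a in effective}
        if new == defeated:
            return defeated
        defeated = new
    raise RuntimeError("no fixpoint reached: framework may be cyclic")

# Example: B's attack on claim A is itself attacked (say, because B's
# credibility report traces back to an unreliable informant), so A survives.
A, B, C = Arg("A"), Arg("B"), Arg("C")
att1 = Att("att1", source=B, target=A)     # B attacks claim A
att2 = Att("att2", source=C, target=att1)  # C attacks the attack itself
print(evaluate({att1, att2}) == {att1})    # only the attack att1 is defeated
```

The meta-attack att2 neutralizes att1 without touching B directly, which is exactly the kind of reasoning over credibility sources that a flat (non-recursive) attack relation cannot express.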

Cited by 16 publications (8 citation statements)
References 27 publications
“…For example, in swarm optimization, agents are categorized and distinguished based on the quality of their solutions in the previous iteration [29]. In IoT systems, some agents and elements may be self-interested, lacking a global perspective of the system [30], with the potential to inject unreliable and misleading information into the system, which needs extra consideration during the modeling and evaluation stages [31].…”
Section: Entities
confidence: 99%
“…The consistency question from [46] together with the proposal of [25,39,40] could be used in our formalism to add an implementation for the operator reject-expert-opinion defined in Section 5. The authors consider that agents can obtain information from multiple informants, and that the attribution of trust to a particular informant can be higher than the trust attributed to others.…”
Section: Related Work
confidence: 99%
“…Thus, the question to be investigated is how members of a research group should update on the receipt of new evidence in a social setting, where they also have access to relevant beliefs of (some or all of) their colleagues, supposing the group wants to strike the best balance between speed (getting at the truth fast) and accuracy (minimizing error rates). The main methodological tool to be used is that of computational agent-based modelling, which has become a central topic in artificial intelligence (Shoham, Powers, & Grenager, 2007; Tamargo, Garcia, Falappa, & Simari, 2014; Nunes & Antunes, 2015; Gottifredi et al., 2018). Specifically, we build on the well-known Hegselmann-Krause (HK) model for studying opinion dynamics in groups of interacting agents focused on a common research question (Hegselmann & Krause, 2002, 2005, 2009; for related models, see Deffuant et al., 2000; Dittmer, 2001; Weisbuch et al., 2002; Pluchino, Latora, & Rapisarda, 2006; Semeshenko, Gordon, & Nadal, 2008; De Langhe & Greiff, 2010).…”
Section: Introduction
confidence: 99%
“…In this model, and also in Douven and Wenmackers' extension of it to be discussed below, all agents in the BCI are treated on a par. If information about the credibility of these agents were available, one might plausibly wish to weigh them differently, for instance, in the manner of Tamargo et al. (2014) and Gottifredi et al. (2018).…”
confidence: 99%