2009 International Conference on Computational Science and Engineering
DOI: 10.1109/cse.2009.297
A Trust-Based Multiagent System

Abstract: Cooperative agent systems often do not account for sneaky agents who are willing to cooperate when the stakes are low and take selfish, greedy actions when the rewards rise. Trust modeling often focuses on identifying the appropriate trust level for the other agents in the environment and then using these levels to determine how to interact with each agent. Adding trust to an interactive partially observable Markov decision process (I-POMDP) allows trust levels to be continuously monitored and corrected, enabling…
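The abstract's core idea, continuously monitoring and correcting a trust level as evidence accumulates, can be illustrated with a simple Bayesian belief update over partner types. The following sketch is an illustration only, not the paper's actual I-POMDP model: the partner types ("cooperative" vs. "sneaky"), the likelihood values, and the `update_trust` function are all assumptions chosen to mirror the abstract's scenario, in which a sneaky agent cooperates at low stakes but defects when rewards rise.

```python
def update_trust(belief_coop, stakes, observed_cooperate):
    """Return the posterior probability that the partner is cooperative.

    belief_coop: prior P(partner is the cooperative type)
    stakes: 'low' or 'high'
    observed_cooperate: whether the partner cooperated this round
    """
    # Assumed likelihoods (illustrative, not from the paper):
    # a cooperative partner cooperates 90% of the time regardless of
    # stakes; a sneaky partner cooperates 90% of the time at low
    # stakes but only 20% of the time at high stakes.
    p_cooperate = {
        'cooperative': {'low': 0.9, 'high': 0.9},
        'sneaky':      {'low': 0.9, 'high': 0.2},
    }
    like_coop = p_cooperate['cooperative'][stakes]
    like_sneaky = p_cooperate['sneaky'][stakes]
    if not observed_cooperate:
        like_coop, like_sneaky = 1.0 - like_coop, 1.0 - like_sneaky
    # Bayes' rule over the two partner types.
    numer = belief_coop * like_coop
    denom = numer + (1.0 - belief_coop) * like_sneaky
    return numer / denom

# Low-stakes cooperation is uninformative (both types behave alike),
# while high-stakes defection is strong evidence of a sneaky partner.
b = 0.5
b = update_trust(b, 'low', True)    # unchanged: 0.5
b = update_trust(b, 'high', False)  # drops sharply
print(round(b, 3))
```

This captures why stakes matter: the belief only moves when the two hypotheses predict different behavior, which is exactly the high-stakes regime where the sneaky agent reveals itself.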

Cited by 13 publications (8 citation statements) | References 9 publications
“…Such scalability is critically needed because I-POMDPs cover an important portion of the multiagent planning problem space (Doshi, 2012;Durfee & Zilberstein, 2013). Applications in diverse areas such as security (Seymour & Peterson, 2009;Ng, Meyers, Boakye, & Nitao, 2010), robotics (Woodward & Wood, 2012;Wang, 2013), ad hoc teams (Chandrasekaran, Doshi, Zeng, & Chen, 2014) and human behavior modeling (Doshi, Qu, Goodie, & Young, 2010;Wunder, Kaisers, Yaros, & Littman, 2011;Hula, Montague, & Dayan, 2015) testify to the wide appeal of I-POMDPs while motivating better scalability.…”
Section: Introduction
confidence: 99%
“…An extension of HMMs, partially observable Markov decision processes (POMDPs), provide a framework that does account for these actions and enables the design and synthesis of a policy to choose optimal actions based on a desired reward function. POMDPs have been used in HMI for automatically generating robot explanations to improve performance [59] and estimating trust in agent-agent interactions [52]. Recent work has demonstrated the use of a POMDP model with human trust dynamics to improve human-robot performance [3][4][5]11].…”
Section: Related Work
confidence: 99%
“…Recent work has demonstrated the use of a POMDP model with human trust dynamics to improve human-robot performance [52]. POMDPs have also been used in HMI for automatically generating robot explanations to improve performance [28]- [30] and estimating trust in agent-agent interactions [53]. For example, the POMDP model in [28]- [30] is used to simulate only the dynamics of the robot's decisions and generates recommendations of different transparency levels.…”
Section: Modeling Human Trust and Workload
confidence: 99%