2022
DOI: 10.1177/1071181322661358
Designing for Mutually Beneficial Decision Making in Human-Agent Teaming

Abstract: This paper presents a joint decision-making framework between human and artificial intelligent agents in an effort to create a cohesive team uninhibited by each other’s actions. Based on the well-known Recognition Primed Decision-Making Model, our framework expands upon RPD’s single decision maker to be more Human-Agent Teaming (HAT) oriented. Specifically, our framework includes three layers of shared cognition to ensure both a consistent level of transparency between members and the efficient completion of t…

Cited by 5 publications (2 citation statements)
References 24 publications (43 reference statements)
“…With this pairing of humans and AI teammates harmoniously balancing each other’s talents, researchers have investigated methods to improve the relationship between these types of teammates. One such factor is building a comprehensible platform of communication that is bi-directional for both humans and AI (Mallick, Sawant, McNeese, & Chalil Madathil, 2022). In further detail, humans need to understand how the AI makes a decision and how it communicates that information, just as the AI needs to understand how and why the humans make the decisions they do, so it can learn from them and gain a better understanding of the environment.…”
Section: Background Factors of Effective Human-Agent Teams
confidence: 99%
“…Teammates allow themselves to repetitively simulate the appropriate teamwork behaviors required of them to work cooperatively with each other as a means of achieving high cohesion. High team cohesion is a significant component of effective HATs (McNeese, Schelble, Canonico, & Demir, 2021), as it encompasses the comfort humans and AI have with each other, expressed through fluid decision-making in which each action mutually benefits the other as they learn the task and learn about each other’s capabilities (Mallick, Sawant, McNeese, & Chalil Madathil, 2022). In this way, post-trained humans and AI understand each other’s distinct responsibilities as they relate to the overall task, and what aid would look like should one entity need it.…”
Section: Introduction
confidence: 99%