2016 · DOI: 10.1016/j.chb.2015.12.039

Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human

Cited by 91 publications (61 citation statements)
References 52 publications
“…There is also work in sociable robots that are able to communicate and interact with users, understand and relate to them in social or human terms, and learn and adapt throughout their lifetimes [9]. Recent work has shown that people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as a human body interface (agent's responses vocalized by a human speech shadower), compared to when the agent's responses were shown as a text screen [14]. Then there is work in embodied conversational agents that take on a physical form with the intent of eliciting more natural communication with users.…”
Section: Related Work
confidence: 99%
“…Somewhat contrary to the above studies on the benefits of transparency, Murgia, Janssens, Demeyer, and Vasilescu (2016) found that, when deploying a bot that answered users' questions in the Stack Overflow community, the bot was regarded more positively when posing as a human than when revealing its bot identity. In a similar vein, Corti and Gillespie (2016) found users more likely to expend effort in making themselves understood when the agent's chat content was conveyed through a human (the so-called echoborg method) than through a text-based interface. Hence, transparency in the machine identity and capabilities of a chatbot may work counter to its intention.…”
Section: Transparency in the Interaction
confidence: 91%
“…Open domain CAs often have chitchat purposes and lack boundaries of a specified task or context information of a closed knowledge field, e.g. [8,16,27].…”
Section: Domain (Do)
confidence: 99%
“…[11,15,20] and such that are non goal-oriented (CGN), e.g. [8,41,45], whose value lies in the interaction itself.…”
Section: CA Representation (Re)
confidence: 99%