2009 IEEE International Conference on Intelligent Computing and Intelligent Systems
DOI: 10.1109/icicisys.2009.5358137

Making a clever intelligent agent: The theory behind the implementation

Abstract: The study of how humans establish mutual understanding is intertwined with the design of artificial conversation systems [1,2,3,4,5]. The focus of this paper is perspective-taking in, and artificial imitation of, communication. Regardless of whether an engineer takes psychological theory into consideration when building an agent, an underlying philosophy of perspective-taking is evident when observing the agent's performance. Furthermore, theories of perspective-taking offer designers an advantage in two ways: 1) …

Cited by 4 publications (3 citation statements)
References 24 publications
“…As our first author has mentioned previously [17], we suggest that a successful communicative agent needs to be more than intelligent. It must also be clever enough to use what we know about how humans communicate to its advantage.…”
Section: Synthesis (mentioning) · confidence: 93%
“…Chatbots that are designed exactly the same but set in different environments can have very different effects on their human interlocutors [17]. People tend to assume that they are understood by their listeners and that they are on the same page as, or following along with, their speakers; as a result, they are easily fooled by chatbots set in constrained environments.…”
Section: Synthesis (mentioning) · confidence: 99%
“…Chatbots demonstrate a poor capacity to reason about conversation, cannot consistently identify and repair misunderstandings, and generally talk at an entirely superficial level (Perlis et al., 1998; Shahri and Perlis, 2008). According to Raine (2009), many chat bots work “based on an assumption that the basic components of a communication are on a phrase-by-phrase basis and that the most immediate input will be the most relevant stimulus for the upcoming output” (p. 399), an operative model that can cause a conversation to fall apart irreparably when the parties' perspectives diverge in the meaning or intention each assigns to an utterance. Human communication is fundamentally temporal and sequential, with many past and possible future utterances feeding into the meaning of a given utterance (Linell, 2009).…”
Section: Contemporary Android Science (mentioning) · confidence: 99%
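The phrase-by-phrase operative model that Raine (2009) attributes to many chatbots can be made concrete with a minimal sketch. The rules, function names, and replies below are hypothetical illustrations, not taken from any of the cited systems: each response is driven solely by the most recent input, with no memory of earlier turns, which is exactly why such a bot cannot repair a misunderstanding once perspectives diverge.

```python
import re

# Hypothetical pattern-response rules: the bot reacts only to the
# current phrase (the "most immediate input"), never to dialogue history.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.IGNORECASE), "Hello! How can I help you?"),
    (re.compile(r"\bweather\b", re.IGNORECASE), "I hear the weather is lovely today."),
    (re.compile(r"\bwhy\b", re.IGNORECASE), "Why do you ask?"),
]

def reply(utterance: str) -> str:
    """Return a response based solely on the current utterance.

    No state is kept between calls, so a context-dependent follow-up
    such as "Why?" receives only a canned, context-free reaction.
    """
    for pattern, response in RULES:
        if pattern.search(utterance):
            return response
    # No pattern matched: fall back to a generic prompt.
    return "Tell me more."

print(reply("Hi there"))          # → "Hello! How can I help you?"
print(reply("Why?"))              # → "Why do you ask?"
print(reply("As I was saying"))   # → "Tell me more."
```

In a constrained environment the fallback and canned replies look plausible enough that, per the citation statements above, users assume they are being understood; the moment the user's intended meaning departs from the matched pattern, the bot has no mechanism to notice or repair the divergence.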