Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004
DOI: 10.1109/icpr.2004.1333924

Understanding inexplicit utterances using vision for helper robots

Abstract: Speech interfaces should be capable of dealing with inexplicit utterances, such as ellipsis and deixis, since these are common phenomena in daily conversation. Their resolution using context and a priori knowledge has been investigated in the fields of natural language and speech understanding. However, there are utterances that cannot be understood by such symbol processing alone. In this paper, we consider inexplicit utterances that arise from the fact that humans have vision. If we are certain…

Cited by 10 publications (4 citation statements)
References 5 publications
“…Some robotic agents shift human attention in several ways including gaze turn, [28] head orientation, [29] reference term, [30] pointing gestures, [31] and body pose. [32] Most of these assumed that a human faces to the robot when their interaction begins.…”
Section: Related Work
confidence: 99%
“…various recognition technologies, such as speech recognition, pointing gesture recognition, and position detection of objects [18]- [19]. There are two approaches to improving the performance of these recognition technologies: the engineering approach and the entrainment approach.…”
Section: Introduction
confidence: 99%
“…Some previous studies on joint attention used several social cues: for example, gaze [18][19], head and gaze [20], reference term and pointing [21][22]. Most of these assumed that the human faces to the robot when their interaction starts.…”
Section: Introduction
confidence: 99%