InterActor: Speech-Driven Embodied Interactive Actor
2004
DOI: 10.1207/s15327590ijhc1701_4

Cited by 98 publications (13 citation statements)
References 7 publications
“…A listener's interaction model includes a nodding reaction model, which estimates the timing of nodding from a speech ON-OFF pattern and is linked to a bodily reaction model (Watanabe et al., 2004). The timing of nodding is predicted using a hierarchical model consisting of two stages, which we label macro and micro (Fig.…”
Section: Interaction Model
Mentioning, confidence: 99%
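The statement above describes estimating nodding timing from a binary speech ON-OFF pattern. A minimal sketch of that idea is a moving-average estimate over recent speech activity that triggers a nod when it crosses a threshold; the window length, weights, and threshold here are illustrative assumptions, not the published model parameters.

```python
# Hedged sketch of a nodding reaction model in the spirit of the
# speech-driven approach described above: a weighted moving average
# over a binary speech ON-OFF sequence. All numeric values below are
# illustrative assumptions, not the values from Watanabe et al. (2004).

def predict_nod(on_off, weights=(0.1, 0.2, 0.3, 0.4), threshold=0.5):
    """Return True if weighted recent speech activity exceeds the threshold.

    on_off: sequence of 0/1 speech samples, most recent last.
    """
    window = list(on_off[-len(weights):])
    # Left-pad with silence if fewer samples than the window length.
    window = [0] * (len(weights) - len(window)) + window
    score = sum(w * x for w, x in zip(weights, window))
    return score > threshold

# Sustained speech drives the estimate above threshold; a single
# recent ON sample does not.
print(predict_nod([1, 1, 1, 1]))  # True  (score = 1.0)
print(predict_nod([0, 0, 0, 1]))  # False (score = 0.4)
```

A two-stage (macro/micro) variant, as the excerpt suggests, would run this kind of estimate at two time scales and combine the results.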
“…In addition, studies of entrainment using robot systems have been conducted (Itai and Miwa, 2007). In our previous research, we analyzed the entrainment between a speaker's speech and a listener's nodding movements during face-to-face communication and developed InterRobot Technology (iRT), which generates a variety of communicative actions and movements related to speech input, such as nodding, blinking, and movements of the head, arms, and waist (Watanabe et al., 2004). In addition, we developed an interactive CG character called "InterActor," which functions as both speaker and listener, and demonstrated that InterActor can effectively support human interaction and communication.…”
Section: Introduction
Mentioning, confidence: 99%
“…Fig. 8 shows an example of the virtual space, in which cars cross the road and unknown people walk along the street randomly. On the other hand, to encourage communication with the partner, a speech-driven avatar named InterActor will be applied (Watanabe et al., 2004). Moreover, the diversity of the computer graphics is limited.…”
Section: Future Work
Mentioning, confidence: 99%
“…The time series of body movements on a second scale is usually much less stable or uniform in the real world than in a laboratory setting [11]. That is, in daily situations, it would be difficult to evaluate the quality of people's communication and to support that communication from the viewpoint of second-scale coevolution, as many previous studies have done [13]-[16], [22]-[29], [37]-[39]. Therefore, if over-second-scale coevolution between people can be observed in daily situations, the evaluation and support of people's communication could be advanced.…”
Section: Introduction
Mentioning, confidence: 97%
“…Riek et al. developed a robot that can mimic the facial gestures of people [37]. Watanabe et al. constructed a system to entrain body movements, including speech, between people and robots/characters [38], [39]. These studies suggest that interpersonal coevolution can be applied to support communication between people, and between people and robots/characters.…”
Section: Introduction
Mentioning, confidence: 99%