Published: 2002
DOI: 10.1145/568513.568514

Multimodal human discourse

Abstract: Gesture and speech combine to form a rich basis for human conversational interaction. To exploit these modalities in HCI, we need to understand the interplay between them and the way in which they support communication. We propose a framework for the gesture research done to date, and present our work on the cross-modal cues for discourse segmentation in free-form gesticulation accompanying speech in natural conversation as a new paradigm for such multimodal interaction. The basis for this integration is the p…

Cited by 230 publications (125 citation statements)
References 28 publications
“…Additionally, each gesture was made once at a rapid pace and once at a slow pace. Gestures number 2, 3, 7, 8, 10, 12, 13, 14, 15, 17, 18, 19 are periodical and in their case the Table 1. The gesture list prepared with the proposed methodology.…”
Section: The Proposed Methodology (mentioning)
confidence: 98%
“…The gesture list prepared with the proposed methodology. Notes: a-We use the terms 'symbolic', 'deictic', and 'iconic' based on McNeill & Levy [8] classification, supplemented with a category of 'manipulative' gestures (following [10]), b-Significant motion components: T-hand translation, R-hand rotation, F-individual finger movement, c-This gesture is usually accompanied with a specific object (deictic) reference.…”
Section: The Proposed Methodology (mentioning)
confidence: 99%
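The table notes quoted above describe an annotation scheme rather than code. As a rough illustration only, the sketch below models one entry of such a gesture list in Python; every name here (GestureCategory, GestureEntry, the T/R/F flags stored as a set) is an assumption for illustration, not the cited authors' implementation.

```python
# Tiny illustrative data model (an assumption, not the cited paper's code) for the
# gesture list described above: each entry carries a McNeill & Levy-style category,
# flags for the significant motion components (T/R/F), and an optional object reference.
from dataclasses import dataclass, field
from enum import Enum

class GestureCategory(Enum):
    SYMBOLIC = "symbolic"
    DEICTIC = "deictic"
    ICONIC = "iconic"
    MANIPULATIVE = "manipulative"   # extra category added by the cited authors

@dataclass
class GestureEntry:
    number: int
    category: GestureCategory
    motion_components: set = field(default_factory=set)  # subset of {"T", "R", "F"}
    periodical: bool = False
    object_reference: bool = False  # note (c): usually accompanied by a specific object

# Example entry in the spirit of Table 1 (values made up for illustration):
example = GestureEntry(number=2, category=GestureCategory.ICONIC,
                       motion_components={"T", "R"}, periodical=True)
```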
“…In HCI, various modalities are studied including speeches and gestures. Quek et al studied multimodal human discourse in aspect of gesture and speech [12]. Christoudias et al proposed cotraining method of multimodal data to construct multimodal interface [13].…”
Section: Related Work (mentioning)
confidence: 99%
“…According to (12), hyperedges with unique information get higher weights by definition. Also, hyperedges with low weight values are eliminated and the erased amounts of hyperedges are regenerated from training set.…”
Section: Learning Of the First-layer Hypernetwork (mentioning)
confidence: 99%
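The excerpt describes a weight-based pruning step: low-weight hyperedges are removed and the same number is resampled from the training set. The sketch below is a minimal illustration of that idea, assuming a simple Hyperedge record, a fixed weight threshold, and random resampling; it is not the cited paper's learning algorithm, and the exact weighting rule referred to as equation (12) is not reproduced here.

```python
# Minimal sketch (not the cited paper's implementation) of the pruning step the
# excerpt describes: hyperedges with low weights are discarded and the same
# number of new hyperedges is resampled from the training set.
# All names (Hyperedge, sample_hyperedge, weight_threshold) are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class Hyperedge:
    features: tuple   # subset of (index, value) pairs making up the hyperedge
    weight: float     # learned weight; higher = more unique/informative

def sample_hyperedge(training_set, order=3):
    """Draw a new hyperedge by sampling `order` features from a random training example."""
    example = random.choice(training_set)            # example: list of (index, value) pairs
    features = tuple(random.sample(example, k=min(order, len(example))))
    return Hyperedge(features=features, weight=1.0)  # start from a neutral weight

def prune_and_regenerate(hyperedges, training_set, weight_threshold=0.1):
    """Remove low-weight hyperedges and regenerate the same number from the training set."""
    kept = [h for h in hyperedges if h.weight >= weight_threshold]
    n_removed = len(hyperedges) - len(kept)
    kept.extend(sample_hyperedge(training_set) for _ in range(n_removed))
    return kept
```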
“…Vector Coherence Mapping (VCM) is a video motion tracking algorithm first introduced by Quek et al [1,7,8], and has been applied to a wide variety of gesture analysis [9,10,12]. VCM is an inherently parallel algorithm that tracks sets of interest points as a spatially and temporally coherent vector fields from raw video.…”
Section: Vector Coherence Mapping (mentioning)
confidence: 99%
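As a rough sketch of the coherence idea summarized in that excerpt (not Quek et al.'s implementation), the code below gives each interest point a match surface over a search window between two frames and fuses it with the surfaces of spatially nearby points before reading off a displacement, so neighbouring points favour mutually consistent motion. The patch and search sizes, the SSD match measure, and the 0.5 fusion weight are all assumptions made for illustration.

```python
# Illustrative sketch of spatially coherent motion vectors in the spirit of VCM.
# Assumes grayscale numpy frames and interest points that lie at least
# patch + search pixels away from the image border.
import numpy as np

def match_surface(prev, curr, pt, patch=4, search=8):
    """SSD-based match surface for one interest point over a (2*search+1)^2 window."""
    y, x = pt
    template = prev[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    size = 2 * search + 1
    surface = np.full((size, size), np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - patch:y + dy + patch + 1,
                        x + dx - patch:x + dx + patch + 1].astype(float)
            if cand.shape == template.shape:
                surface[dy + search, dx + search] = np.sum((template - cand) ** 2)
    return -surface  # negate so that higher values mean a better match

def coherent_vectors(prev, curr, points, neighbour_radius=20, search=8):
    """Fuse each point's surface with its neighbours' before picking a displacement."""
    surfaces = [match_surface(prev, curr, p, search=search) for p in points]
    vectors = []
    for i, p in enumerate(points):
        fused = surfaces[i].copy()
        for j, q in enumerate(points):
            if i != j and np.hypot(p[0] - q[0], p[1] - q[1]) <= neighbour_radius:
                fused = fused + 0.5 * surfaces[j]   # neighbours vote for coherent motion
        dy, dx = np.unravel_index(np.argmax(fused), fused.shape)
        vectors.append((dy - search, dx - search))  # (row, col) displacement per point
    return vectors
```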