2018
DOI: 10.1360/n112017-00211
Intelligence methods of multi-modal information fusion in human-computer interaction

Cited by 9 publications (8 citation statements)
References 33 publications (38 reference statements)
“…In multimodal fusion research, fusion methods are mainly divided into the decision layer, the data layer, and the model layer [1]. For decision-layer fusion, Vu et al. [32] combined two single-modality recognizers, for speech and gestures, through weighting criteria and an optimal-probability fusion method.…”
Section: B. Multimodal Interaction (mentioning)
Confidence: 99%
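The decision-layer scheme described above can be sketched as a weighted combination of per-modality class posteriors. This is a generic illustration of weighted decision-layer fusion, not Vu et al.'s exact method; the weights and probability values are invented for the example.

```python
import numpy as np

def decision_layer_fusion(speech_probs, gesture_probs, w_speech=0.6, w_gesture=0.4):
    """Fuse two single-modality class posteriors by weighted summation
    (a common decision-layer scheme; the weights here are illustrative)."""
    fused = w_speech * np.asarray(speech_probs) + w_gesture * np.asarray(gesture_probs)
    return fused / fused.sum()  # renormalize to a probability distribution

# Hypothetical 3-class posteriors from independent speech and gesture recognizers.
speech = [0.7, 0.2, 0.1]
gesture = [0.4, 0.5, 0.1]
fused = decision_layer_fusion(speech, gesture)
decision = int(np.argmax(fused))  # fused = [0.58, 0.32, 0.10], so class 0 wins
```

In practice the weights would be tuned on validation data, e.g. to reflect how reliable each modality's recognizer is.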
“…For the system, understanding the user's intentions is a 9-class task. We establish the behavior set as n ∈ [1, 9], where n indexes the 9 kinds of user intentions.…”
Section: A. Multimodal Navigational Interaction Overall Framework (mentioning)
Confidence: 99%
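A minimal sketch of reading an intention n ∈ [1, 9] off a fused score vector via argmax; the score values below are invented for illustration and are not from the cited system.

```python
import numpy as np

# Hypothetical fused scores over the 9 user intentions (indices n = 1..9).
fused_scores = np.array([0.02, 0.05, 0.10, 0.40, 0.08, 0.15, 0.05, 0.10, 0.05])
n = int(np.argmax(fused_scores)) + 1  # shift to the 1-based index n in [1, 9]
assert 1 <= n <= 9  # here n = 4, the highest-scoring intention
```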
“…The multimodal fusion strategy [27] is mainly divided into feature-layer fusion and decision-layer fusion. For feature-layer fusion, Jiang et al. [28] proposed in 2010 a multimodal biometric recognition method based on the Laplacian subspace for low-level fusion of face and speech.…”
Section: Related Work (mentioning)
Confidence: 99%
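Feature-layer fusion, by contrast, combines modality features before classification. A minimal sketch is simple concatenation of per-modality feature vectors into one joint representation; this is a generic illustration with made-up dimensions, not Jiang et al.'s Laplacian-subspace method.

```python
import numpy as np

def feature_layer_fusion(face_feat, speech_feat):
    """Early (feature-layer) fusion: concatenate per-modality feature
    vectors into one joint vector for a downstream classifier."""
    return np.concatenate([np.asarray(face_feat), np.asarray(speech_feat)])

# Illustrative dimensions: a 128-d face embedding and 40-d speech features.
face = np.random.randn(128)
speech = np.random.randn(40)
joint = feature_layer_fusion(face, speech)
assert joint.shape == (168,)  # joint representation fed to one classifier
```

Subspace methods such as the cited Laplacian approach would then project this joint vector to a lower-dimensional space before classification.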