Proceedings of the Working Conference on Advanced Visual Interfaces 2008
DOI: 10.1145/1385569.1385625
Ambiguity detection in multimodal systems

Abstract: Multimodal systems allow users to communicate in a natural way according to their needs. However, the naturalness of the interaction makes it hard to find one and only one interpretation of the user's input. Consequently, methods for interpreting users' input and detecting ambiguities are needed. This paper proposes a theoretical approach based on a Constraint Multiset Grammar combined with Linear Logic, for representing and detecting ambiguities, and in particular semantic a…
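The abstract describes the approach only at a high level. As a rough illustration of the underlying idea (not the paper's CMG/Linear Logic formalism), ambiguity detection can be reduced to checking whether an input admits more than one interpretation; the Interpretation type and the toy one-rule grammar below are assumptions made purely for this sketch.

# Minimal sketch: an input is ambiguous when more than one reading
# survives interpretation. The rule below is a toy stand-in for the
# paper's grammar-based approach.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interpretation:
    action: str   # e.g. "open"
    target: str   # e.g. "file", "window"

def interpret(tokens: list[str]) -> set[Interpretation]:
    """Toy 'grammar': map a token sequence to every reading it admits."""
    readings = set()
    if "open" in tokens:
        # in this toy context, "open" may refer to a file or a window
        readings.add(Interpretation("open", "file"))
        readings.add(Interpretation("open", "window"))
    return readings

def is_ambiguous(tokens: list[str]) -> bool:
    return len(interpret(tokens)) > 1

print(is_ambiguous(["open", "it"]))  # True: two readings survive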

Cited by 13 publications (6 citation statements)
References 15 publications
“…As defined in [102], each terminal element E_i is identified by a set of meaningful features, as follows: E_i^mod corresponds to the modality (e.g., speech, facial expression, gesture) used to create the element E_i; E_i^repr indicates how the element E_i is represented by the modality; E_i^time measures the time interval (based on the start and end time values) over which the element E_i was created; E_i^role corresponds to the syntactic role that the element E_i plays in the multimodal sentence, according to the Penn Treebank tag set [103] (e.g., noun, verb, adjective, adverb, pronoun, preposition, etc.); and E_i^concept gives the semantic meaning of the element considering the conceptual structure of the context [104]. Given two elements E_i and E_j, where E_j has a close-by relationship with E_i [7], E_i^coop is set to the same value as E_j^coop and specifies the type of cooperation [7] between the elements E_i and E_j.…”
Section: Representation
confidence: 99%
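The feature set quoted above reads naturally as a record type. Below is a minimal sketch in Python; the TerminalElement class, its field names, and the set_cooperation helper are illustrative assumptions that mirror the quoted features, not an API defined in [102].

# Illustrative model of a terminal element E_i and its features;
# field names follow the quotation above, times are (start, end).

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TerminalElement:
    mod: str                    # modality, e.g. "speech", "gesture"
    repr: str                   # how the modality represents the element
    time: Tuple[float, float]   # (start, end) of the creation interval
    role: str                   # Penn Treebank tag, e.g. "VB", "NN" [103]
    concept: str                # semantic meaning in the context model [104]
    coop: Optional[str] = None  # cooperation type shared with a close-by element [7]

def set_cooperation(e_i: TerminalElement, e_j: TerminalElement, coop_type: str) -> None:
    """When e_j is close by e_i, both carry the same cooperation type."""
    e_i.coop = coop_type
    e_j.coop = coop_type

# Example: a spoken verb and a pointing gesture with overlapping intervals
say = TerminalElement("speech", "word", (0.0, 0.4), "VB", "open")
point = TerminalElement("gesture", "pointing", (0.1, 0.5), "NN", "file")
set_cooperation(say, point, "complementarity")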
“…semantic meaning of the element considering the conceptual structure of the context [104]. Given two elements E_i and E_j, where E_j has a close-by relationship with E_i [7], E_i^coop is set to the same value as E_j^coop and specifies the type of cooperation [7] between the elements E_i and E_j.…”
Section: Representation
confidence: 99%
“…The introduction of a classificatory step before the ambiguity solution allows the adoption of a systematic and modular approach. We start from the idea that an incorrect (i.e., ambiguous) interpretation requires identifying the meaningful features to be managed in order to solve the ambiguity [22]. This paper goes beyond the static classification process proposed in [10] and provides a dynamic approach modeling knowledge about multimodal ambiguities.…”
Section: Problem Statement
confidence: 99%
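As a rough illustration of the classify-before-solve pipeline described in the statement above, the sketch below names an ambiguity class from the features that distinguish competing readings and then dispatches a per-class solver. The Reading type, the class labels, and the trivial solvers are hypothetical placeholders, not the classification proposed by the paper.

# Hedged sketch of a classify-then-solve pipeline: one solver module per
# ambiguity class keeps the approach systematic and modular.

from collections import namedtuple

Reading = namedtuple("Reading", ["action", "target"])

def classify(readings: set) -> str:
    """Name the ambiguity class from the features distinguishing readings."""
    if len(readings) <= 1:
        return "none"
    if len({r.target for r in readings}) > 1:
        return "semantic"  # readings disagree on meaning
    return "other"

SOLVERS = {
    "semantic": lambda rs: sorted(rs, key=lambda r: r.target)[0],
    "other": lambda rs: next(iter(rs)),
}

def solve(readings: set) -> Reading:
    kind = classify(readings)
    if kind == "none":
        return next(iter(readings))
    return SOLVERS[kind](readings)

print(solve({Reading("open", "file"), Reading("open", "window")}))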
“…This paper discusses the classification step, proposing a new classification that extends and reformulates the ambiguity classifications presented for Natural Language (NL) [15] and Visual Languages (VLs) [16], and evolves previous work on multimodal ambiguities [17].…”
Section: Figure 1. Steps of the Multimodal Interaction Management
confidence: 99%