2018
DOI: 10.31234/osf.io/qvr9j
Preprint

How Optimal is Word-Referent Identification Under Multimodal Uncertainty?

Abstract: Identifying a spoken word in a referential context requires both the ability to integrate multimodal input and the ability to reason under uncertainty. How do these tasks interact with one another? We study how adults identify novel words under joint uncertainty in the auditory and visual modalities and we propose an ideal observer model of how cues in these modalities are combined optimally. Model predictions are tested in four experiments where recognition is made under various sources of uncertainty. We fou…
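The abstract's ideal-observer claim corresponds, in its standard form, to Bayesian cue combination. As a minimal sketch (the notation is illustrative and assumed here, not quoted from the preprint), the posterior over a candidate word w given auditory input A and visual input V, with the two modalities treated as conditionally independent given w, is

P(w \mid A, V) \;\propto\; P(A \mid w)\, P(V \mid w)\, P(w)

where P(A \mid w) is the auditory likelihood, P(V \mid w) the likelihood of the visual referential context, and P(w) the prior over candidate words; optimal identification selects the w that maximizes this posterior.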

Cited by 3 publications (3 citation statements)
References 65 publications

“…Consider a speaker who asks you to "Pass the salt" in a noisy restaurant where it is difficult to perceive what she is saying. Recent theoretical and empirical work suggests that children and adults can overcome a "noisy signal" by integrating what they perceive with their prior beliefs about what the speaker was likely to have said (Fourtassi & Frank, 2017;Gibson, Bergen, & Piantadosi, 2013;Yurovsky, Case, & Frank, 2017). In the current work, we pursue this idea and ask whether listeners strategically gather visual information to integrate with the linguistic signal and facilitate comprehension.…”
mentioning
confidence: 99%
“…For example, in prior work, we found that children and adults fixated longer on a speaker's face when processing familiar words in a "noisy" auditory environment, suggesting that they compensated for uncertainty in the auditory signal by gathering more visual information (MacDonald, Marchman, Fernald, & Frank, 2018). Moreover, recent theoretical and empirical work suggests that children and adults handle noise in the signal by integrating what they perceive with their prior beliefs about the speaker's intended meaning (Fourtassi & Frank, 2017;Gibson, Bergen, & Piantadosi, 2013;Yurovsky, Case, & Frank, 2017).…”
Section: Introduction
mentioning
confidence: 76%
“…Recent theoretical and empirical work suggests that children and adults handle this sort of noise in the signal by integrating what they perceive with their prior beliefs about the speaker's intended meaning (Fourtassi & Frank, 2017;Gibson, Bergen, & Piantadosi, 2013;Yurovsky, Case, & Frank, 2017).…”
Section: Introduction
mentioning
confidence: 99%
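The recurring claim in these citation statements, that listeners recover an intended word by integrating a noisy percept with prior beliefs about what the speaker likely meant, is a noisy-channel computation. Below is a toy numerical sketch of that integration (an assumed illustration; the scenario, numbers, and code are not taken from the preprint or the citing papers).

```python
# Toy noisy-channel illustration (assumed example, not code from any cited work).
# A listener hears a degraded word in a noisy restaurant and combines the acoustic
# likelihood with a contextual prior over what the speaker plausibly intended.

candidates = ["salt", "fault"]

# Acoustic likelihood: the noisy percept is slightly more consistent with "fault".
acoustic_likelihood = {"salt": 0.4, "fault": 0.6}

# Contextual prior: a salt shaker is visible, so "pass the fault" is implausible.
prior = {"salt": 0.95, "fault": 0.05}

# Posterior over intended words: P(w | percept) ∝ P(percept | w) * P(w)
unnormalized = {w: acoustic_likelihood[w] * prior[w] for w in candidates}
total = sum(unnormalized.values())
posterior = {w: p / total for w, p in unnormalized.items()}

print(posterior)  # roughly {'salt': 0.93, 'fault': 0.07}
```

With a strong contextual prior for "salt", the posterior favors "salt" (about 0.93) even though the acoustics slightly favor "fault", mirroring how context can override a degraded signal.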