1996
DOI: 10.3758/bf03201085
Picture-word differences in a sentence verification task

Abstract: Effects of picture-word format were investigated with four problem-solving items. In Experiment 1, picture-word input was presented for 8 sec, followed by a test sentence that included verbatim and inference statements. Subjects made a true/false reaction-time response to the test sentence. In Experiment 2, the input remained on the screen while the test sentence was presented at stimulus onset asynchronies varying from 0 to 1,000 msec. Results showed that responses to pictures were faster than responses to words, and t…

Cited by 27 publications (3 citation statements)
References 20 publications
“…Numerous studies indicate that semantic analysis is performed faster for pictures than for text and that graphical information is more easily and efficiently remembered than textual information [19], [43]. These studies suggest that graphical representations of software are inherently useful, though particular representations may not be.…”
Section: Need For Experimentation
confidence: 99%
“…In addition, a few failures to find the typical congruence effect (longer response latencies for a picture-sentence mismatch than for a match) with serial picture-sentence presentation have led to concerns about the generality of the paradigm. Despite its occasional use to study language comprehension (e.g., Goolkasian, 1996; Reichle, Carpenter, and Just, 2000; Singer, 2006; Underwood et al., 2004), the fact remains that insights obtained with this paradigm have had minimal impact on psycholinguistic theories of online sentence comprehension or of situated sentence comprehension.…”
Section: Introduction
confidence: 99%
“…Inspired by the word/picture sentence verification task from psycholinguistics (Goolkasian, 1996), we further propose various novel evaluation settings by representing the scene bimodally, as both an image and a caption. First, TRAVLR supports the novel cross-modal transfer setting (Figure 1): if pretrained V+L models have learnt a truly multimodal representation, they should be able to learn a reasoning task with input from one modality and perform inference using input from the other modality with little to no extra training.…”
Section: Introduction
confidence: 99%