2016
DOI: 10.1080/13506285.2016.1274810

Disentangling boundary extension and normalization of view memory for scenes


Cited by 11 publications (13 citation statements)
References 28 publications
“…The fact that we find no evidence of BE with our abstract objects in Experiment 2 may also be due to such a normalization process. However, at this point, it is not possible for us to determine with these results whether the lack of BE is due to an anchoring effect as described above or a normalization effect reported by McDunn et al. (2016). Future research is necessary to understand possible mechanisms eliminating BE.…”
Section: Discussion (mentioning)
confidence: 69%
“…Gottesman and Intraub (2002) showed that photographs of objects on blank backgrounds elicited BE whereas line drawings of the same objects did not. They suggested that real photographs as opposed to line drawings are more likely to induce a partial image of a continuous scene (for a critical review of this study, see McDunn, Brown, Hale, & Siddiqui, 2016). In her unpublished thesis, Gottesman (1998) created nonorganized scenes by cutting a scene into six pieces and jumbling it up; she found BE even for these jumbled scenes.…”
(mentioning)
confidence: 99%
“…The visual trace of the image is integrated with this extended scene schema during encoding, leading to an extended representation of the image at retrieval. Interestingly, however, the amount of boundary extension evoked by a scene depends on its depicted scale: boundary extension is stronger for close-scale views, and is reduced or even absent as the view increases in distance (Bertamini et al., 2005; Intraub et al., 1992; McDunn et al., 2016). Given that viewing distance is a continuous variable with nearly infinite range, this raises an intriguing question: how does memory for an environment change across an extensive range of distances?…”
Section: Introduction (mentioning)
confidence: 99%
“…The BE error pattern for close-to-far views, described previously, occurred in an immediate test; whereas a two-day-delayed test elicited bi-directional errors (including contraction for distant views) [4]. Reducing background distinctiveness (same objects) also shifted the error pattern from BE to bi-directional (contraction of more distant views), as did increasing the distance between closer and farther scenes in an intermixed set (creating a greater 'pull' to the average) [9].…”
Section: Correspondence (mentioning)
confidence: 62%
“…Factors that overload spatial memory increase reliance on a different memory mechanism, normalization (memory averaging) that readily overrides BE. It elicits bidirectional errors (extension/contraction) as views 'regress' toward the prototype [4,9]. The BE error pattern for close-to-far views, described previously, occurred in an immediate test; whereas a two-day-delayed test elicited bi-directional errors (including contraction for distant views) [4].…”
Section: Correspondence (mentioning)
confidence: 71%