Acknowledgments: The authors would like to thank Aude Oliva and Wilma Bainbridge for their help in conceiving this project, and Alex Clarke for helpful comments on the manuscript.
Abstract: It is generally assumed that the encoding of a single event generates multiple memory representations, which contribute differently to subsequent episodic memory. We used fMRI and representational similarity analysis (RSA) to examine how visual and semantic representations predicted subsequent memory for single-item encoding (e.g., seeing an orange). Three levels of visual representations, corresponding to early, middle, and late visual processing stages, were based on a deep neural network. Three levels of semantic representations were based on normative Observed ("is round"), Taxonomic ("is a fruit"), and Encyclopedic ("is sweet") features. We identified brain regions where each representation type predicted later Perceptual Memory, Conceptual Memory, or both (General Memory). Participants encoded objects during fMRI and then completed both a word-based conceptual memory test and a picture-based perceptual memory test. Visual representations predicted subsequent Perceptual Memory in visual cortices, but also facilitated Conceptual and General Memory in more anterior regions. Semantic representations, in turn, predicted Perceptual Memory in visual cortex, Conceptual Memory in perirhinal and inferior prefrontal cortices, and General Memory in the angular gyrus. These results suggest that the contribution of visual and semantic representations to subsequent memory effects depends on a complex interaction between representation, test type, and storage location.
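For readers unfamiliar with RSA, the sketch below illustrates the core comparison in a minimal form: a model representational dissimilarity matrix (RDM), built from feature vectors such as DNN layer activations or semantic feature norms, is correlated with a brain RDM built from voxel patterns. The inputs `model_features` and `brain_patterns` are hypothetical placeholders, not the paper's actual data or pipeline, and the specific choices (1 − Pearson r distance, Spearman correlation) are common defaults rather than a description of the authors' exact analysis.

```python
# Minimal RSA sketch (assumed, illustrative inputs only).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rdm(patterns: np.ndarray) -> np.ndarray:
    """Condensed RDM: 1 - Pearson r between every pair of item patterns."""
    return pdist(patterns, metric="correlation")


def rsa_score(model_features: np.ndarray, brain_patterns: np.ndarray) -> float:
    """Spearman correlation between the model RDM and the brain RDM."""
    rho, _ = spearmanr(rdm(model_features), rdm(brain_patterns))
    return rho


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_items = 60
    # Hypothetical stand-ins: one DNN layer's activations and one ROI's voxel patterns.
    model_features = rng.normal(size=(n_items, 512))
    brain_patterns = rng.normal(size=(n_items, 200))
    print(f"Model-brain RSA (Spearman rho): {rsa_score(model_features, brain_patterns):.3f}")
```

In a subsequent-memory version of this analysis, such model-brain similarity scores would then be related to whether each item was later remembered on the perceptual or conceptual test; that step is not shown here.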