Some stimuli are more memorable than others. Humans have demonstrated partial access to the properties that make a given stimulus more or less memorable. Recently, a deep neural network named ResMem was shown to predict the memorability of visual stimuli as well. However, it remains unknown whether ResMem's predictions reflect the influence of stimulus-intrinsic properties or instead stimulus-extrinsic factors that are also known to induce interindividual consistency in memory performance (e.g., interstimulus similarity). It is also unclear whether ResMem and humans access overlapping properties of memorability. Here, in three experiments, we show that ResMem predicts stimulus-intrinsic memorability independent of stimulus-extrinsic factors, and that it captures aspects of memorability that are inaccessible to human observers. Taken together, our results confirm the multifaceted nature of memorability and establish a method for isolating aspects of memorability that are largely inaccessible to humans.
Public Significance Statement
Some images are consistently easier to remember than others across human observers, making them more memorable. Previous research has shown that both humans and a pretrained neural network called ResMem can predict image memorability. However, it remains unclear whether humans and ResMem rely on the same aspects of an image when predicting its memorability. Our study first demonstrated that ResMem predicted the memorability of images without relying on their similarity to other images. More crucially, we found that humans and ResMem used largely nonoverlapping aspects of images to predict memorability, suggesting that ResMem can be used to elucidate aspects of memorability that are not explicitly accessible to humans.
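For readers who wish to obtain memorability predictions for their own images, the sketch below illustrates one way to query the pretrained model. It assumes the publicly released `resmem` Python package and its `ResMem`/`transformer` interface; the image path is a placeholder.

```python
# Minimal sketch: scoring one image's memorability with pretrained ResMem.
# Assumes the publicly released `resmem` package (pip install resmem);
# "example.jpg" is a placeholder path.
from PIL import Image
from resmem import ResMem, transformer

model = ResMem(pretrained=True)  # load the pretrained weights
model.eval()                     # inference mode, no gradient updates

img = Image.open("example.jpg").convert("RGB")
x = transformer(img)             # resize/crop to the 227x227 input ResMem expects

# The model outputs a scalar memorability score (roughly in [0, 1]).
score = model(x.view(-1, 3, 227, 227))
print(f"Predicted memorability: {score.item():.3f}")
```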