Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.178

Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models

Abstract: We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release HIPPOCORPUS, a dataset of 7,000 stories about imagined and recalled events. We introduce a measure of narrative flow and use this to examine the narratives for imagined and recalled events. Additionally, we measure the differential recruitment of knowledge attributed to semantic memory versus episodic memory (Tulving, 1972)…
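The narrative flow measure is only named here, not defined. As a hedged formalization of the general idea (not necessarily the paper's exact definition), one can score each story sentence by how much more likely a pretrained language model finds it when conditioned on the preceding sentences than when conditioned on the story prompt alone. In the expression below, E denotes the prompt or topic, s_1, …, s_n the story sentences, and p_θ the language model; the notation is illustrative.

\Delta\ell(s_i) = \frac{1}{|s_i|}\left[\log p_\theta(s_i \mid E, s_{1:i-1}) - \log p_\theta(s_i \mid E)\right]

A higher Δℓ(s_i) means the sentence follows more tightly from what precedes it, i.e., the narrative flows more linearly; a code sketch of this idea appears after the Discussion citation below.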

Cited by 24 publications (34 citation statements). References 39 publications.
“…We also observed that the temporal order recognition correlated with segmentation performance, so that with fine segmentation, the temporal memory was disturbed (Spearman ranked r = −0.33, p = 0.008; mixed-effect model F(1,57) = 12.59, p < 0.001; Figure 2 B; part 1: r = −0.27, p = 0.12; part 2: r = −0.31; p = 0.10). There was no presumed number of events in the movies of this experiment to subsequently count the number of recalled events; instead, we used natural language processing measurements for counting the recall length and number of factual “realis” events ( Sap et al., 2020 ; Sims et al., 2019 ). We did not observe any false memories or intrusions in the movie summaries, and we observed a relationship between event segmentation and number of realis events (Spearman r = 0.38, p = 0.004; mixed-effect F(1,51) = 0.46, p = 0.49; part 1: r = 0.37, p = 0.06; part 2: r = −0.09, p = 0.64; Figure 2 C) but not the recall length overall (r = 0.068, p = 0.62; part 1: r = 0.34, p = 0.08; part 2: r = −0.15, p = 0.43).…”
Section: Results (mentioning)
confidence: 99%
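The statistics quoted above are standard rank correlations, supplemented by mixed-effects models for repeated measures. As a hedged illustration only, using synthetic placeholder data rather than the cited study's measurements, a Spearman rank correlation and its p-value can be computed with SciPy:

```python
# Hedged illustration with synthetic placeholder data (NOT the cited study's data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 60  # hypothetical number of participants
segmentation_fineness = rng.normal(size=n)                       # e.g., boundaries marked per movie
order_accuracy = -0.3 * segmentation_fineness + rng.normal(size=n)

rho, p = spearmanr(segmentation_fineness, order_accuracy)
print(f"Spearman r = {rho:.2f}, p = {p:.3f}")
# The cited work also fits mixed-effects models (e.g., statsmodels' mixedlm)
# to account for repeated measures; that step is omitted here.
```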
“…We also studied a link between event segmentation and the subsequent memory performance (recall and temporal order memory). We accessed recall by applying natural language processing algorithms to count the number of “realis” events (i.e., factual and non-hypothetical words) and the total number of written words in the free recall task ( Sap et al., 2020 ; Sims et al., 2019 ).…”
Section: Methods (mentioning)
confidence: 99%
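The quoted methods count "realis" events (factual, non-hypothetical event mentions, following Sims et al., 2019) along with total recall length. The exact event tagger used in the cited work is not reproduced here; the sketch below is only a crude heuristic proxy, assuming spaCy with the en_core_web_sm model is available, that counts non-modal, non-negated verbs as candidate realis events and whitespace tokens as recall length:

```python
# Crude heuristic proxy, NOT the Sims et al. (2019) event tagger:
# count finite verbs that are neither negated nor governed by a modal.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def count_realis_and_length(recall_text: str) -> tuple[int, int]:
    doc = nlp(recall_text)
    realis = 0
    for token in doc:
        if token.pos_ != "VERB":
            continue
        has_modal = any(child.tag_ == "MD" for child in token.children)   # would, could, ...
        negated = any(child.dep_ == "neg" for child in token.children)    # not, never, ...
        if not has_modal and not negated:
            realis += 1
    return realis, len(recall_text.split())

events, words = count_realis_and_length(
    "We walked to the park. She would have stayed longer."
)
print(events, words)  # the hypothetical "would have stayed" is skipped; "walked" is counted
```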
“…We did not observe any false memories or intrusions. We further assessed the free recall performance with a natural language processing algorithm that determines the number of “realis” words used for factual events (Sap et al, 2020; Sims et al, 2019). Although this method does not include the recalled details of the movies, we observed a trend toward greater recall of factual events by fine-segmenters than coarse segmenters (two-sample t(8) = −1.69, p = 0.06; see the supplemental results for the main effect of movie type and the number of recalled words).…”
Section: Results (mentioning)
confidence: 99%
“…We predicted that temporal order recognition was due to the storyline if the effect was driven by the accuracy of remembering the order of far scenes. We then used natural language processing algorithms to count the number of “realis” events (i.e., factual and non-hypothetical words) and the total number of written words in the free recall task to determine which group recalled more events about the movies (Sap et al, 2020; Sims et al, 2019).…”
Section: Analysis Details (mentioning)
confidence: 99%
“…Some researchers are already using pre-trained transformers to measure the probability of the next event in a story, for instance, producing an estimate of "narrative flow." 29 Many new strategies will have a principled mathematical foundation. But the results they produce may not always signify what the math implies they should signify.…”
Section: Discussion (mentioning)
confidence: 99%
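The "narrative flow" estimate mentioned in the quote above can be sketched with a pretrained transformer language model. The code below assumes the Hugging Face transformers package and GPT-2 (assumptions for illustration, not necessarily the setup of the original paper) and implements the per-token log-likelihood difference sketched after the abstract: each sentence is scored with the preceding story as context versus with the prompt alone.

```python
# Hedged sketch of a narrative-flow-style score; an illustration of the idea,
# not the original paper's exact pipeline. Assumes `transformers` and GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def per_token_logprob(context: str, sentence: str) -> float:
    """Average log-probability of `sentence` tokens given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    sent_ids = tokenizer(" " + sentence, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, sent_ids], dim=1)
    logits = model(input_ids).logits
    # Each sentence token is predicted by the logits at the previous position.
    log_probs = torch.log_softmax(logits[0, ctx_ids.size(1) - 1 : -1], dim=-1)
    token_lps = log_probs.gather(1, sent_ids[0].unsqueeze(1)).squeeze(1)
    return token_lps.mean().item()

def flow_scores(prompt: str, sentences: list[str]) -> list[float]:
    scores = []
    for i, sent in enumerate(sentences):
        context = (prompt + " " + " ".join(sentences[:i])).strip()
        with_story = per_token_logprob(context, sent)
        prompt_only = per_token_logprob(prompt, sent)
        scores.append(with_story - prompt_only)  # higher = flows more from the prior sentences
    return scores

story = ["We drove to the coast.", "The tide was already coming in.", "We ate lunch on the rocks."]
print(flow_scores("A day trip to the beach.", story))
```

For the first sentence the two conditions coincide, so its score is zero by construction; differences between imagined and recalled stories would show up in how the later sentences are scored.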