2017
DOI: 10.1016/j.concog.2017.09.004

The interpretation of dream meaning: Resolving ambiguity using Latent Semantic Analysis in a small corpus of text

Abstract: Computer-based dream content analysis relies on word frequencies within predefined categories to identify different elements in text. As a complementary approach, we explored the capabilities and limitations of word-embedding techniques for identifying word-usage patterns in dream reports. These tools allow us to quantify word associations in text and to identify the meaning of target words. Word embeddings have been extensively studied in large datasets, but only a few studies analyze…
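
As a rough illustration of the approach described in the abstract, the sketch below builds LSA word vectors from a toy corpus with Gensim and scores the association between two target words by cosine similarity. The corpus, the helper names, and the dimension choice are hypothetical stand-ins, not the paper's actual data or pipeline.

```python
# A minimal sketch (not the paper's pipeline): LSA word vectors from a
# small corpus, with cosine similarity as the word-association measure.
import numpy as np
from gensim import corpora, models

# Hypothetical toy corpus of tokenized dream reports.
reports = [
    "i was falling from a tall bridge".split(),
    "a dog was chasing me through the house".split(),
    "i was flying over the city and felt free".split(),
]

dictionary = corpora.Dictionary(reports)
bow = [dictionary.doc2bow(doc) for doc in reports]

# Truncated SVD over the term-document matrix (the core of LSA).
lsa = models.LsiModel(bow, id2word=dictionary, num_topics=2)

def word_vector(word):
    # Row of the left singular matrix U: the word's LSA representation.
    return lsa.projection.u[dictionary.token2id[word]]

def association(w1, w2):
    # Cosine similarity between two words' LSA vectors.
    v1, v2 = word_vector(w1), word_vector(w2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(association("falling", "flying"))  # toy association score
```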

Cited by 61 publications (36 citation statements)
References 45 publications

“…We discarded infrequent tokens, with fewer than 5 repetitions, and very frequent tokens, with a frequency higher than 10⁻³. We set the window size and negative sampling to 15 (which were found to be maximal in two semantic tasks over the TASA corpus (Altszyler et al., 2017)). Word2vec semantic representations were generated with the Gensim Python library (Řehůřek and Sojka, 2010).…”
Section: Word2vec Representation (50 Features)
Citation type: mentioning
Confidence: 99%
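For concreteness, here is a hedged sketch of the word2vec configuration this excerpt describes, assuming the Gensim 4.x API. The placeholder corpus and the skip-gram choice (sg=1) are assumptions, and Gensim's sample parameter downsamples frequent tokens rather than removing them outright.

```python
# Sketch of the excerpt's word2vec setup, assuming the Gensim 4.x API.
from gensim.models import Word2Vec

# Placeholder corpus; in practice, the tokenized dream reports go here.
sentences = [["the", "dream", "felt", "real"],
             ["a", "dog", "chased", "me", "home"]] * 50

model = Word2Vec(
    sentences,
    vector_size=50,  # 50-feature representations, per the section title
    min_count=5,     # discard tokens with fewer than 5 repetitions
    sample=1e-3,     # downsample tokens with frequency above 10^-3
    window=15,       # window size of 15 (Altszyler et al., 2017)
    negative=15,     # negative sampling set to 15
    sg=1,            # skip-gram variant (an assumption here)
)

vectors = model.wv   # one 50-dimensional vector per retained token
```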
“…In the LSA implementation, a Log-Entropy transformation was applied before the truncated Singular Value Decomposition. In the Skip-gram implementation, we discarded tokens with frequency higher than 10⁻³, and we set the window size and negative sampling parameters to 15 (which were found to be maximal in two semantic tasks over the TASA corpus (Altszyler et al., 2017)). In all cases, word-embedding dimension values were varied to study their dependency.…”
Section: Methods
Citation type: mentioning
Confidence: 99%
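The Log-Entropy-then-SVD pipeline this excerpt mentions maps directly onto Gensim's LogEntropyModel and LsiModel; the toy documents and the dimension below are illustrative assumptions, not the citing paper's data or settings.

```python
# Sketch of Log-Entropy weighting followed by truncated SVD (LSA) in Gensim.
from gensim import corpora, models

# Hypothetical tokenized documents.
docs = [
    "dream of falling from a bridge".split(),
    "dream of flying over the city".split(),
    "a dog in the house".split(),
] * 10

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

# Log-Entropy transformation applied before the truncated SVD.
log_entropy = models.LogEntropyModel(bow)
weighted = log_entropy[bow]

# Truncated SVD; num_topics is the embedding dimension that the
# excerpt says was varied (kept tiny here for the toy corpus).
lsa = models.LsiModel(weighted, id2word=dictionary, num_topics=2)
```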
“…A recent use of LDA and word2vec includes the detection of fake news on Twitter (Helmstetter & Paulheim, 2018). For other examples of uses of the LSA and LDA algorithms in a psychological context, we refer the reader to Chen and Wojcik (2016) and Altszyler, Ribeiro, Sigman, and Slezak (2017).…”
Section: Textual Data
Citation type: mentioning
Confidence: 99%