2020
DOI: 10.3389/frai.2020.00040
Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text?

Abstract: In the Thematic Apperception Test, a picture story exercise (TAT/PSE; Heckhausen, 1963), it is assumed that unconscious motives can be detected in the text someone tells about the pictures shown in the test. This text is therefore classified by trained experts according to evaluation rules. We tried to automate this coding and used a recurrent neural network (RNN) because of the sequential input data. There are two different cell types to improve recurrent neural networks regarding long-term dependencies in s…

Cited by 121 publications (58 citation statements)
References 8 publications
“…Thus GRU is a slightly more streamlined variant that often offers comparable performance and is significantly faster to compute [18]. Although GRUs have been shown to exhibit better performance on certain smaller and less frequent datasets [18, 34], both variants of RNN have proven their effectiveness while producing the outcome. …”
Section: Deep Learning Techniques and Applications
Mentioning (confidence: 99%)
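The "more streamlined" claim in the statement above comes down to parameter counts: an LSTM has four weight blocks (input, forget, and output gates plus the candidate state), a GRU only three (reset and update gates plus the candidate). A minimal sketch, with illustrative sizes chosen here purely as an assumption:

```python
def lstm_params(x_size, h_size):
    # LSTM: 3 gates + 1 candidate = 4 weight blocks,
    # each mapping [input; hidden] -> hidden, plus one bias per block.
    return 4 * h_size * (x_size + h_size + 1)

def gru_params(x_size, h_size):
    # GRU: 2 gates + 1 candidate = 3 weight blocks.
    return 3 * h_size * (x_size + h_size + 1)

# Hypothetical layer sizes for illustration.
x_size, h_size = 128, 256
print(lstm_params(x_size, h_size))  # 394240
print(gru_params(x_size, h_size))   # 295680, i.e. 25% fewer parameters
```

The 3:4 ratio holds for any layer size, which is why GRUs tend to be faster to compute at comparable hidden widths.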
“…The main difference between LSTM and GRU cells is their number of gates and internal states, where LSTMs are more complex (two internal states and three gates) than GRUs (one internal state and two gates). While in some cases GRUs outperform LSTMs, there is no clear rule of when to use one or the other (Yazidi et al, 2020). Each RNN contains a FNN layer with a single node at its end, which is used to compute the predicted values from the hidden states of the last time step (h_T). …”
Section: Recurrent Neural Network
Mentioning (confidence: 99%)
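The architecture described in that statement — a GRU maintaining a single internal state through its two gates, followed by a single-node feed-forward head on the last hidden state h_T — can be sketched in plain Python. Scalar states and hand-picked weights are used here only for readability; they are assumptions, not trained values:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x, h, w):
    """One GRU step: two gates, one internal state (contrast: LSTM has
    three gates and a separate cell state alongside the hidden state)."""
    z = sigmoid(w["z_x"] * x + w["z_h"] * h)        # update gate
    r = sigmoid(w["r_x"] * x + w["r_h"] * h)        # reset gate
    n = math.tanh(w["n_x"] * x + r * w["n_h"] * h)  # candidate state
    return (1.0 - z) * n + z * h                    # single internal state

# Illustrative weights (hypothetical, untrained).
w = {"z_x": 0.5, "z_h": 0.1, "r_x": 0.3, "r_h": 0.2, "n_x": 0.8, "n_h": 0.4}

h = 0.0
for x in [1.0, -0.5, 0.25]:  # a toy input sequence
    h = gru_step(x, h, w)

# Single-node FNN head on the final hidden state h_T.
y = 0.9 * h + 0.1
```

Because the candidate state passes through tanh and the update gate blends convexly, the hidden state stays bounded in (-1, 1) regardless of sequence length.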
“…Other research has noted that GRU cells may be better for specificity, or finding true negatives, and focusing on less prevalent content, whereas LSTM cells are better for detecting true positives and focusing on highly prevalent content (Gruber and Jockisch, 2020). Looking into the dev dataset, only 6% of the observations contain no toxic spans, but this model is not predicting whether an entire comment contains any toxic spans; it is predicting if each word is toxic. …”
Section: Results and Analysis
Mentioning (confidence: 98%)