2020
DOI: 10.1007/978-981-15-3325-9_20

Emoticon Prediction on Textual Data Using Stacked LSTM Model

Cited by 3 publications (2 citation statements)
References 7 publications
“…For example, the paper cites a study which used an SVM (Support Vector Machine) model to achieve an accuracy of 82.3% and another which used a BLSTM-CNN model to achieve an accuracy of 87.8%. The author of this paper [12] describes a method for predicting emoticons on textual data using a stacked Long Short-Term Memory (LSTM) model. The stacked LSTM model is composed of multiple LSTM layers, which are used to process the text and generate an output corresponding to a particular emoticon.…”
Section: Related Work
confidence: 99%
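The statement above describes a stacked LSTM as multiple LSTM layers feeding into a classification output over emoticon classes. As an illustration only (not the cited paper's actual implementation), a minimal NumPy sketch of a two-layer stacked LSTM forward pass might look like the following; all dimensions, parameter names, and the random weights are assumptions for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; the four gates are slices of one joint projection."""
    H = h.shape[0]
    z = W @ x + U @ h + b            # shape (4H,)
    i = sigmoid(z[0:H])              # input gate
    f = sigmoid(z[H:2*H])            # forget gate
    o = sigmoid(z[2*H:3*H])          # output gate
    g = np.tanh(z[3*H:4*H])          # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def run_lstm_layer(xs, W, U, b, H):
    """Run one LSTM layer over a sequence; return the hidden state at each step."""
    h, c = np.zeros(H), np.zeros(H)
    hs = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        hs.append(h)
    return hs

rng = np.random.default_rng(0)
D, H1, H2, T, n_emoticons = 8, 16, 16, 5, 4   # toy sizes, chosen arbitrarily

# Random (untrained) parameters, purely for illustration.
W1, U1, b1 = rng.normal(0, 0.1, (4*H1, D)),  rng.normal(0, 0.1, (4*H1, H1)), np.zeros(4*H1)
W2, U2, b2 = rng.normal(0, 0.1, (4*H2, H1)), rng.normal(0, 0.1, (4*H2, H2)), np.zeros(4*H2)
W_out = rng.normal(0, 0.1, (n_emoticons, H2))

xs = [rng.normal(size=D) for _ in range(T)]    # stand-in for embedded tokens
h1 = run_lstm_layer(xs, W1, U1, b1, H1)        # first LSTM layer
h2 = run_lstm_layer(h1, W2, U2, b2, H2)        # second (stacked) layer
logits = W_out @ h2[-1]                        # classify from the last hidden state
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over emoticon classes
print(probs.shape)                             # (4,)
```

The point of the stacking is simply that the second layer consumes the full hidden-state sequence of the first, so later layers can model higher-level sequence features before the final emoticon classifier.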
“…The principle behind the LSTM architecture is a memory cell which can sustain its state over time [40], and non-linear gating units which manage the information flux into and out of the cell. A deep network of convolutional LSTM can be used to access the full spectrum [41] of temporal information at spatial scales of the data.…”
Section: Detailed Architecture
confidence: 99%
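The principle quoted above, a memory cell that can sustain its state over time under the control of non-linear gating units, can be demonstrated with a tiny hand-set example. This is a sketch of the gating mechanism only, with scalar state and hand-chosen gate pre-activations (all values here are assumptions for illustration, not from either cited work):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cell_step(c, f_pre, i_pre, g):
    """One scalar LSTM cell-state update: c' = f*c + i*g."""
    f = sigmoid(f_pre)   # forget gate: how much of the old state to keep
    i = sigmoid(i_pre)   # input gate: how much of the new candidate to admit
    return f * c + i * g

# "Remember" mode: forget gate saturated near 1, input gate near 0,
# so the cell state is sustained over many steps.
c = 1.0
for _ in range(100):
    c = cell_step(c, f_pre=10.0, i_pre=-10.0, g=0.5)
c_remember = c           # stays close to 1.0 after 100 steps

# "Write" mode: flipping the gates overwrites the state with the candidate.
c_write = cell_step(c_remember, f_pre=-10.0, i_pre=10.0, g=0.5)
print(c_remember, c_write)   # near 1.0, then near 0.5
```

A convolutional LSTM applies the same gating, but replaces the matrix products in the gate computations with convolutions so the cell state carries spatial structure; the state-sustaining behaviour shown here is unchanged.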