2021
DOI: 10.26650/acin.947747
Sentiment Analysis of Twitter Data During the Covid-19 Pandemic Using an LSTM Deep Learning Approach

Cited by 6 publications (9 citation statements); references 1 publication.
“…In the scope of the KTS process [62], the video segments (i.e., video shots) needed for video summarization are selected from the detected change points. This meta-model was built from VGG19 [60] and Bi-LSTM [14][15][16] components, as described in the preceding sections of this study. After this summarization training, the original full-length video can be given as input and summarized using shot-level scores computed as the averages of the frame scores.…”
Section: Multimodal Deep Learning Methodology (citation type: mentioning)
confidence: 99%
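The shot-scoring step described above can be sketched in a few lines: given per-frame importance scores and the (start, end) shot boundaries produced by change-point detection, each shot's score is the mean of its frame scores. The function name, score values, and boundary format below are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def shot_scores(frame_scores, change_points):
    """Average per-frame scores within each shot.

    change_points: list of (start, end) inclusive frame indices per shot,
    a hypothetical output format for a KTS-style change-point detector.
    """
    return np.array([frame_scores[s:e + 1].mean() for s, e in change_points])

# Toy example: 6 frame scores split into 2 shots.
scores = np.array([0.1, 0.2, 0.3, 0.9, 0.8, 0.4])
cps = [(0, 2), (3, 5)]
print(shot_scores(scores, cps))  # one averaged score per shot
```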
“…Shel et al. [17] proposed a model that exemplifies multimodal video prediction with deep learning. Their idea is to extract CNN [4] features from the video frames and feed them into Long Short-Term Memory (LSTM) [14][15][16] layers, and to extract MFCC features from the audio tracks of those frames and feed them into a separate set of LSTM layers. Multimodal feature learning is then applied to these two outputs, and classification is performed at the end.…”
Section: Related Work (citation type: mentioning)
confidence: 99%