2022
DOI: 10.1016/j.eswa.2022.116626
Multi-modality helps in crisis management: An attention-based deep learning approach of leveraging text for image classification

Cited by 11 publications (2 citation statements)
References 18 publications
“…An efficient DL algorithm was presented in [21] to manipulate multimodal information sources (words and photos) and disseminate helpful information during natural disasters. The programme divided the tweets into seven crucial and actionable groups, including reports of "hurt or dead individuals" and "infrastructure damage".…”
Section: Related Work
Confidence: 99%
“…Research by [32-37] similarly highlighted the multimodal model utilised in various textual, image and video datasets for different domains. Likewise, [15, 17, 20-22] employed multimodality to develop DL-based models in facilitating disaster management and recovery. Table I depicts the comparison of different classification algorithms and their effects on each model.…”
Section: Fig. 3 The Experiments Framework
Confidence: 99%