Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing 2019
DOI: 10.1145/3297280.3297481
A computationally efficient multi-modal classification approach of disaster-related Twitter images

Cited by 23 publications (11 citation statements). References 19 publications.
“…A few recent works by Alam et al (2017) and Nguyen et al (2017b) used visual features only in finding informative images in case of disaster. Rizk et al (2019) and Mouzannar et al (2018) used both textual and visual features related to built-infrastructure damage, nature damage, and fire for estimating the damage due to disaster. To the best of our knowledge, no work has been reported where tweet text and images are used together to filter informative tweets from massive social media contents…”
Section: Introduction
Mentioning confidence: 99%
“…They used Nepal Earthquake, Ecuador Earthquake, Hurricane Matthew, Typhoon Ruby, and Google Images datasets and trained event-specific as well as cross-event classifiers. Their CNN model outperformed Bag-of-Visual-Words (BoVW) techniques and achieved F1 scores in the range of 0.67 to 0.89. Recently, researchers have proposed multi-modal systems utilizing both tweet text and images for finding relevant information from social media. Rizk et al (2019) developed a multi-modal disaster-related classifier to classify Twitter data into the built-infrastructure damage and nature damage classes…”
Mentioning confidence: 99%
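The event-specific CNN classifier described in this excerpt is usually built by fine-tuning an ImageNet-pretrained network on disaster imagery. Below is a minimal sketch of that general recipe, not the cited authors' exact model; the VGG16 backbone, the three-class label set, and the layer sizes are illustrative assumptions.

# Sketch of a fine-tuned CNN image classifier for disaster imagery.
# The backbone, class count, and head sizes are assumptions for illustration.
import tensorflow as tf

NUM_CLASSES = 3  # assumed labels, e.g. severe / mild / no damage

# ImageNet-pretrained backbone used as a frozen feature extractor.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # train_* are assumed arrays

Freezing the convolutional base keeps training cheap, which is consistent with the computational-efficiency focus of the cited paper.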
“…They used the Inception pre-trained model for visual feature extraction and designed a CNN architecture for textual features. Similarly, Rizk et al [35] proposed a multimodal architecture to classify the Twitter data into infrastructure and natural damage categories. Ofli et al [8] also presented a multimodal approach for classifying the tweets into two categories: informative task (e.g., informative vs. non-informative) and humanitarian task (e.g., affected individuals, rescue volunteering or donation effort, infrastructure and utility damage).…”
Section: B. Multimodal Approaches
Mentioning confidence: 99%
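The Inception-plus-text-CNN design attributed to [35] follows a common fusion pattern that can be sketched as below. This is a hedged illustration of the general pattern rather than the architecture from [35]; the vocabulary size, sequence length, class count, and layer widths are assumptions.

# Sketch of a two-branch multimodal classifier: a frozen InceptionV3 image
# branch and a 1-D CNN text branch, fused by concatenation.
# All sizes below are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 20000, 50, 2  # assumed values

# Image branch: ImageNet-pretrained InceptionV3 as a feature extractor.
img_in = tf.keras.Input(shape=(299, 299, 3))
inception = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
inception.trainable = False
img_feat = inception(img_in)

# Text branch: token embedding followed by a 1-D convolution.
txt_in = tf.keras.Input(shape=(SEQ_LEN,))
x = tf.keras.layers.Embedding(VOCAB_SIZE, 128)(txt_in)
x = tf.keras.layers.Conv1D(128, 5, activation="relu")(x)
txt_feat = tf.keras.layers.GlobalMaxPooling1D()(x)

# Fusion and classification head.
fused = tf.keras.layers.Concatenate()([img_feat, txt_feat])
fused = tf.keras.layers.Dense(256, activation="relu")(fused)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = tf.keras.Model(inputs=[img_in, txt_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])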
“…On the imagery content, they achieved an F1 score of 87.74% using XGBoost [14]. The study in [56] proposes a simple, computationally inexpensive, multi-modal two-stage framework to classify tweets (text and image) into built-infrastructure damage vs. nature damage. The study evaluated this approach using a home-grown dataset and the SUN dataset [71].…”
Section: Multimodality (Image and Text)
Mentioning confidence: 99%
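The two-stage framework attributed to [56] can be illustrated with a feature-level sketch: a first classifier filters out irrelevant tweets, and a second separates built-infrastructure damage from nature damage over concatenated text and image features. The gradient-boosted classifiers (XGBoost, mentioned earlier in this excerpt for [14]), the helper two_stage_predict, and all hyperparameters are assumptions for illustration, not the paper's actual method.

# Sketch of a two-stage classification pipeline over fused text and image
# features. Classifier choice and hyperparameters are illustrative assumptions.
import numpy as np
from xgboost import XGBClassifier

def two_stage_predict(text_feats, image_feats, relevance_clf, damage_clf):
    """Return -1 for tweets judged irrelevant, else a 0/1 damage-class label."""
    fused = np.hstack([text_feats, image_feats])   # simple feature-level fusion
    relevant = relevance_clf.predict(fused) == 1   # stage 1: relevance filter
    labels = np.full(len(fused), -1)
    if relevant.any():
        labels[relevant] = damage_clf.predict(fused[relevant])  # stage 2: damage type
    return labels

# Training sketch (feature arrays and labels are assumed to be precomputed):
# relevance_clf = XGBClassifier(n_estimators=200, max_depth=4).fit(X_train, y_relevant)
# damage_clf = XGBClassifier(n_estimators=200, max_depth=4).fit(
#     X_train[y_relevant == 1], y_damage[y_relevant == 1])

Operating on precomputed tabular features rather than training an end-to-end deep fusion model is one way to stay computationally inexpensive, which matches the framing in the excerpt above.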