The World Health Organization declared coronavirus disease 2019 (COVID-19) a pandemic on 11 March 2020. Soon after, on 14 March 2020, the Ministry of Home Affairs, India decided to treat COVID-19 as a "notified disaster" due to the spurt in coronavirus cases in the country, leading to a complete shutdown from 24 March 2020. This affected all sectors of the country, including the education sector. The near-total closure of schools, colleges, and universities disrupted academic activities at every level. The objective of this online survey study is to understand the day-to-day living, activities, learning styles, and mental health of young students in India during this unprecedented crisis, and to assess how they are adapting to new e-learning styles and managing their social lives.
During the past decade, social media platforms have been used extensively during disasters for information dissemination by affected communities and humanitarian agencies. Although many recent studies have classified informative versus non-informative messages from social media posts, most are unimodal, i.e., they use textual or visual data independently to build deep learning models. In the present study, we integrate the complementary information provided by text and image messages about the same event posted by the affected community on the social media platform Twitter, and build a multimodal deep learning model based on the attention mechanism. The attention mechanism is a recent breakthrough that has revolutionized the field of deep learning. Just as humans pay more attention to a specific part of a text or image while ignoring the rest, neural networks can be trained to concentrate on the more relevant features through attention. We propose a novel Cross-Attention Multi-Modal (CAMM) deep neural network for classifying multimodal disaster data, which uses the attention mask of the textual modality to highlight the features of the visual modality. We compare CAMM with unimodal models and with the most popular bilinear multimodal models, MUTAN and BLOCK, generally used for visual question answering. CAMM achieves an average F1-score of 84.08%, better than the MUTAN and BLOCK methods by 6.31% and 5.91%, respectively. The proposed cross-attention-based multimodal deep learning method outperforms the current state-of-the-art fusion methods on the benchmark multimodal disaster dataset by highlighting the more relevant cross-domain features of text and image tweets. This study affirms that social media platforms are a rich source of multimodal data during a disaster.
This data can be utilized to build automated tools for quick filtration of informative messages to assess the post-disaster needs of the affected community and provide timely help.
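The core idea behind the cross-attention fusion described above can be illustrated with a toy sketch: a pooled text representation scores each visual region, and the resulting attention weights produce a text-guided summary of the image features. This is a minimal NumPy illustration of the general mechanism, not the authors' CAMM implementation; the function names, feature dimensions, and scaled dot-product scoring are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_feat, image_feats):
    """Toy text-to-image cross-attention (illustrative, not CAMM itself).

    text_feat:   (d,)  pooled representation of the tweet text
    image_feats: (n, d) features for n visual regions of the tweet image
    Returns the text-weighted visual summary and the attention weights.
    """
    d = text_feat.shape[0]
    # Score each image region against the text (scaled dot product).
    scores = image_feats @ text_feat / np.sqrt(d)
    # Attention mask derived from the textual modality.
    weights = softmax(scores)
    # Highlight visual features most relevant to the text.
    attended = weights @ image_feats
    return attended, weights
```

In a full model, the attended visual vector would be fused with the text representation (e.g., concatenated) and passed to a classifier that labels the tweet as informative or not.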