Cultural heritage sites are precious and fragile resources that hold significant historical, esthetic, and social values in our society. However, the increasing frequency and severity of natural and man-made disasters constantly strike cultural heritage sites and cause significant damage. In this article, we focus on a cultural heritage damage assessment (CHDA) problem where the goal is to accurately locate the damaged areas of a cultural heritage site using imagery data posted on social media during a disaster event by exploring the collective strengths of both AI and human intelligence from crowdsourcing systems. Unlike infrastructure-based solutions, social media platforms provide a more pervasive and scalable way to acquire timely cultural heritage damage information during disaster events. Our work is motivated by the limitation of current AI solutions, which fail to accurately model complex cultural heritage damage because they lack the essential human cultural knowledge needed to differentiate various damage types and identify the actual causes of the damage. Two critical technical challenges exist in solving our problem: 1) it is challenging to effectively detect problematic cultural heritage damage estimations of AI in the absence of ground truth labels and 2) it is nontrivial to acquire accurate cultural background knowledge from potentially unreliable crowd workers to effectively address the failure cases of AI. To address these challenges, we develop CollabLearn, an uncertainty-aware crowd-AI collaborative assessment system that explicitly leverages human intelligence from crowdsourcing systems to identify and fix AI failure cases and boost damage assessment accuracy in CHDA applications. The evaluation results on real-world datasets show that CollabLearn consistently outperforms the state-of-the-art baselines in damage assessment accuracy.
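The abstract does not give implementation details, so the following is only a minimal Python sketch of the general uncertainty-aware crowd-AI loop it describes: flag high-uncertainty AI predictions in the absence of ground truth, route them to crowd workers, and aggregate potentially unreliable crowd labels with reliability weights. All function and variable names (e.g., entropy_uncertainty, weighted_majority_vote, the threshold value) are illustrative assumptions, not the paper's actual method.

import numpy as np

def entropy_uncertainty(probs):
    """Predictive entropy of the AI's class probabilities (higher = more uncertain)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def weighted_majority_vote(labels, reliabilities, num_classes):
    """Aggregate crowd labels, weighting each worker by an estimated reliability."""
    scores = np.zeros(num_classes)
    for label, reliability in zip(labels, reliabilities):
        scores[label] += reliability
    return int(np.argmax(scores))

def crowd_ai_assess(ai_probs, crowd_labels, crowd_reliabilities,
                    num_classes, uncertainty_threshold=1.0):
    """For each image: keep the AI damage label when the AI is confident,
    otherwise fall back to reliability-weighted crowd aggregation."""
    final_labels = []
    for i, probs in enumerate(ai_probs):
        if entropy_uncertainty(probs) <= uncertainty_threshold:
            final_labels.append(int(np.argmax(probs)))
        else:
            final_labels.append(weighted_majority_vote(
                crowd_labels[i], crowd_reliabilities[i], num_classes))
    return final_labels

# Toy usage: 2 images, 3 hypothetical damage classes, 3 crowd workers for the flagged image.
ai_probs = np.array([[0.9, 0.05, 0.05],   # confident -> AI label kept
                     [0.4, 0.35, 0.25]])  # uncertain -> crowd consulted
crowd_labels = [[], [1, 1, 2]]
crowd_reliabilities = [[], [0.8, 0.7, 0.4]]
print(crowd_ai_assess(ai_probs, crowd_labels, crowd_reliabilities, num_classes=3))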
Driven by state-of-the-art optical sensing and image processing technologies, remote urban sensing (RUS) has emerged as a powerful sensing paradigm to capture abundant visual information about the urban environment for intelligent city monitoring, planning, and management. In this article, we focus on a classification and super-resolution coupling (CSC) problem in RUS applications, where the goal is to explore the interdependence between two critical tasks (i.e., classification and super-resolution) to concurrently boost the performance of both tasks. Two fundamental challenges exist in solving our problem: 1) it is challenging to obtain accurate classification results and generate high-quality reconstructed images without knowing either of them a priori and 2) the noise embedded in the image data could be greatly amplified by the complex interdependence and coupling between the two tasks. To address these challenges, we develop SCLearn, a novel deep convolutional neural network architecture that couples the classification task with the super-resolution task in an integrated learning framework to concurrently boost the performance of both tasks. The evaluation results on a real-world RUS application over two European cities (Barcelona and Berlin) show that SCLearn consistently outperforms the state-of-the-art baselines by simultaneously achieving better land usage classification accuracy and higher reconstructed image quality under various application scenarios.
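The abstract does not specify SCLearn's architecture; purely as an illustrative assumption, the PyTorch sketch below shows one generic way to couple a land-use classification head and a super-resolution head on a shared convolutional backbone and train them with a joint loss, which is the kind of integrated learning framework the abstract describes. The module names, layer sizes, and loss weight are hypothetical, not the paper's design.

import torch
import torch.nn as nn

class CoupledNet(nn.Module):
    """Toy shared-backbone network with a classification head and a
    super-resolution head, trained jointly (illustrative only)."""
    def __init__(self, num_classes=10, upscale=2):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(          # land-use classification head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )
        self.sr_head = nn.Sequential(              # super-resolution head
            nn.Conv2d(64, 3 * upscale ** 2, 3, padding=1),
            nn.PixelShuffle(upscale),              # rearranges channels into a larger image
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.classifier(feats), self.sr_head(feats)

# Joint objective: weighted sum of the two task losses (the 0.5 weight is an assumption).
model = CoupledNet()
lr_images = torch.randn(4, 3, 32, 32)              # low-resolution inputs
class_targets = torch.randint(0, 10, (4,))         # land-use labels
hr_targets = torch.randn(4, 3, 64, 64)             # high-resolution references
logits, sr_out = model(lr_images)
loss = nn.CrossEntropyLoss()(logits, class_targets) + 0.5 * nn.L1Loss()(sr_out, hr_targets)
loss.backward()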