Salient objects attract human attention and usually stand out clearly from their surroundings, whereas camouflaged objects share similar colors or textures with their environment. Salient objects are therefore typically non-camouflaged, and camouflaged objects are usually not salient. Motivated by this inherent "contradictory" attribute, we introduce an uncertainty-aware learning pipeline that extensively explores the contradictory information of salient object detection (SOD) and camouflaged object detection (COD) via data-level and task-wise contradiction modeling. We first exploit the "dataset correlation" of the two tasks and argue that easy samples in the COD dataset can serve as hard samples for SOD, improving the robustness of the SOD model. Based on the assumption that the two models should produce activation maps highlighting different regions of the same input image, we further introduce a "contrastive" module within a joint-task contrastive learning framework to explicitly model the contradictory attributes of the two tasks. Unlike conventional intra-task contrastive learning for unsupervised representation learning, our "contrastive" module is designed to model task-wise correlation, leading to cross-task representation learning. To better understand the two tasks from the perspective of uncertainty, we extensively investigate uncertainty estimation techniques for modeling the main uncertainty of each task, namely "task uncertainty" (for SOD) and "data uncertainty" (for COD), aiming to effectively estimate the challenging regions of each task and achieve difficulty-aware learning. Experimental results on benchmark datasets demonstrate that our solution achieves both state-of-the-art performance and informative uncertainty estimation.
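To make the cross-task idea more concrete, below is a minimal sketch of how such a "contrastive" module could be wired up. The class name `CrossTaskContrast`, the pooled feature shapes, and the InfoNCE-style loss with a single cross-task negative are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a cross-task "contrastive" module (illustrative assumptions,
# not the authors' exact implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossTaskContrast(nn.Module):
    """Pushes apart SOD and COD representations of the same image while
    pulling together two augmented views within the same task."""
    def __init__(self, feat_dim=256, proj_dim=128, temperature=0.1):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim))
        self.t = temperature

    def forward(self, sod_feat, sod_feat_aug, cod_feat):
        # All inputs: (B, feat_dim) pooled features from the two task branches.
        z_s  = F.normalize(self.proj(sod_feat), dim=1)
        z_s2 = F.normalize(self.proj(sod_feat_aug), dim=1)
        z_c  = F.normalize(self.proj(cod_feat), dim=1)

        pos = (z_s * z_s2).sum(dim=1) / self.t   # intra-task positive pair
        neg = (z_s * z_c).sum(dim=1) / self.t    # cross-task negative pair
        logits = torch.stack([pos, neg], dim=1)  # (B, 2)
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)   # InfoNCE with one negative
```

Treating the intra-task augmented pair as the positive and the SOD/COD pair from the same image as the negative mirrors the assumption that the two models should attend to different regions of the same input.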
Most recent successful object detection methods are based on convolutional neural networks (CNNs). Previous studies show that many feature reuse methods improve network performance but increase the number of parameters. DenseNet alleviates this increase by using thin layers with few channels, which motivated us to seek other ways to mitigate the growth in model size introduced by feature reuse. In this work, we apply different feature reuse methods to fire units and mobile units, and construct two novel neural networks, fire-FRD-CNN and mobile-FRD-CNN. We evaluate the proposed networks on the KITTI and PASCAL VOC datasets.
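As an illustration of the thin-layer idea mentioned above, the sketch below shows a DenseNet-style block in which each layer contributes only a small, fixed number of channels (the growth rate) while reusing all earlier feature maps. The class name `ThinDenseBlock` and the specific sizes are hypothetical and not taken from the paper.

```python
# Illustrative sketch of DenseNet-style feature reuse with "thin" layers:
# each layer adds only `growth_rate` channels and is concatenated with all
# previous feature maps. Names and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ThinDenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False)))
            channels += growth_rate  # inputs grow through reuse, but each layer stays thin

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier feature maps
            features.append(out)
        return torch.cat(features, dim=1)
```

Because every layer emits only `growth_rate` channels, the parameter count grows far more slowly than it would if each reused feature map were processed at full width.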