Depth maps provide readily acquirable and irreplaceable geometric information that significantly complements traditional color images. RGB and Depth (RGBD) images have been widely used in image analysis applications, yet their potential remains limited by the gap between the two modalities and by the misalignment between color and depth. In this paper, a Fully Aligned Fusion Network (FAFNet) for RGBD semantic segmentation is presented. To improve cross-modality fusion, a new RGBD fusion block is proposed: features from color images and depth maps are first fused by an attention cross fusion module and then aligned by a semantic flow. A multi-layer structure is also designed to apply the RGBD fusion block hierarchically, which not only mitigates the low resolution and noise of depth maps but also reduces the loss of semantic features during upsampling. Quantitative and qualitative evaluations on both the NYU-Depth V2 and SUN RGB-D datasets demonstrate that FAFNet outperforms state-of-the-art RGBD semantic segmentation methods.
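As a rough illustration of the fusion described above, the sketch below pairs a channel-attention cross fusion of RGB and depth features with a flow-based warping step for alignment. It is a minimal PyTorch-style sketch, not the authors' implementation; the module names (AttentionCrossFusion, SemanticFlowAlign), the layer sizes, and the exact attention and flow formulations are assumptions made for illustration only.

```python
# Illustrative sketch of an RGBD fusion block in the spirit of the abstract:
# cross-modal attention fusion followed by semantic-flow alignment.
# Module names and layer choices are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCrossFusion(nn.Module):
    """Fuse RGB and depth features using channel attention computed from both modalities."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        attn = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        # The attention map decides how much depth evidence to inject into the RGB stream.
        return rgb_feat + attn * depth_feat


class SemanticFlowAlign(nn.Module):
    """Predict a 2-channel flow field and warp low-resolution fused features
    onto the high-resolution grid instead of relying on naive bilinear upsampling."""
    def __init__(self, channels):
        super().__init__()
        self.flow = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, low_feat, high_feat):
        b, _, h, w = high_feat.shape
        low_up = F.interpolate(low_feat, size=(h, w), mode="bilinear", align_corners=False)
        flow = self.flow(torch.cat([low_up, high_feat], dim=1))  # (B, 2, H, W), pixel offsets
        # Build a base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=low_feat.device),
            torch.linspace(-1, 1, w, device=low_feat.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Scale predicted pixel offsets into the normalized grid and warp.
        offset = 2.0 * flow.permute(0, 2, 3, 1) / torch.tensor(
            [w, h], dtype=flow.dtype, device=flow.device)
        return F.grid_sample(low_up, grid + offset, align_corners=False)
```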
In a single-observer passive localization system, the velocity and position of the target are estimated simultaneously. However, this can lead to correlated errors and distorted estimates, making independent estimation of the velocity and position necessary. In this study, we introduce a novel optimization strategy, suboptimal estimation, for independently estimating the velocity vector in single-observer passive localization. The suboptimal estimation strategy converts the estimation of the velocity vector into a search for the global optimal solution by dynamically weighting multiple optimization criteria, starting from an initial point in the solution space. Simulation verification is conducted with uniform motion and constant acceleration models. The results demonstrate that the proposed method converges faster, achieves higher accuracy, and exhibits strong robustness.
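To make the dynamic-weighting idea concrete, the sketch below shows one way such a scheme could look for a constant-velocity, bearings-only scenario: two cost terms are blended with a weight that shifts across iterations, and the velocity estimate is refined from a starting point in the solution space. The measurement model, the choice of criteria, the weighting schedule, and all function names are illustrative assumptions, not the method proposed in the paper.

```python
# Illustrative sketch only: a dynamically weighted multi-criteria search for a
# velocity vector, loosely following the strategy described in the abstract.
# Cost terms, weighting schedule, and measurement model are assumptions.
import numpy as np
from scipy.optimize import minimize

def bearing_residual(v, positions, bearings, x0, times):
    """Mismatch between predicted and measured bearings for a constant-velocity target."""
    pred = x0 + np.outer(times, v)          # predicted target positions over time
    rel = pred - positions                  # observer-to-target vectors
    pred_bearings = np.arctan2(rel[:, 1], rel[:, 0])
    diff = np.angle(np.exp(1j * (pred_bearings - bearings)))  # wrapped angular error
    return np.sum(diff ** 2)

def smoothness_penalty(v, v_prev):
    """Discourage large jumps away from the previous velocity estimate."""
    return np.sum((v - v_prev) ** 2)

def suboptimal_velocity_estimate(positions, bearings, x0, times, iters=20):
    v = np.zeros(2)                         # starting point in the solution space
    for k in range(iters):
        w = k / max(iters - 1, 1)           # dynamic weight shifts emphasis between criteria
        cost = lambda vv: ((1 - w) * smoothness_penalty(vv, v)
                           + w * bearing_residual(vv, positions, bearings, x0, times))
        v = minimize(cost, v, method="Nelder-Mead").x
    return v
```

The gradual shift of weight from the smoothness term to the measurement term acts like a continuation scheme, steering the search from the starting point toward a good solution; this interpretation of "dynamically weighting multiple optimization criteria" is an assumption for illustration.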