Ship detection in synthetic aperture radar (SAR) images is a research hotspot in the field of marine surveillance. Fusing salient features into the detection network can effectively improve ship detection precision; however, effectively fusing the salient features of SAR images remains a difficult task. In this paper, to improve ship detection precision, we design a novel one-stage ship detection network that fuses salient features with deep convolutional neural network (CNN) features. Firstly, a saliency map extraction algorithm is proposed, which generates saliency maps from multi-scale pyramid features and frequency domain features. Secondly, the backbone of the ship detection network is a two-stream network: the upper stream takes the original SAR image as input to extract multi-scale deep CNN features, while the lower stream takes the corresponding saliency map as input to acquire multi-scale salient features. Thirdly, a novel salient feature fusion method is designed to integrate the salient features with the deep CNN features. Finally, an improved bi-directional feature pyramid network is applied to the ship detection network to reduce computational complexity and the number of network parameters. The proposed methods are evaluated on a public ship detection dataset, and the experimental results show a significant improvement in the precision of SAR image ship detection. INDEX TERMS Ship detection, synthetic aperture radar images, feature fusion, saliency map, deep convolutional neural network.
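As a hedged illustration of the fusion idea described above, the sketch below weights each deep-feature cell by its saliency value and adds the result back residually, so salient regions are amplified and non-salient regions pass through unchanged. The function name and the exact fusion rule are illustrative assumptions, not the paper's implementation.

```python
def fuse_salient_features(cnn_features, saliency_map):
    """Fuse a 2-D CNN feature map with a same-sized saliency map.

    cnn_features: list of rows of floats (H x W)
    saliency_map: list of rows of floats in [0, 1] (H x W)
    Returns the fused H x W feature map.
    """
    fused = []
    for f_row, s_row in zip(cnn_features, saliency_map):
        # Residual-style fusion: salient cells are amplified by their
        # saliency weight; cells with zero saliency are left unchanged.
        fused.append([f * (1.0 + s) for f, s in zip(f_row, s_row)])
    return fused

features = [[1.0, 2.0], [3.0, 4.0]]
saliency = [[0.0, 1.0], [0.5, 0.0]]
print(fuse_salient_features(features, saliency))  # [[1.0, 4.0], [4.5, 4.0]]
```

A multiplicative-plus-residual rule like this is a common choice because a zero-saliency cell degrades gracefully to the original CNN feature rather than being suppressed entirely.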
Computational assessment of the aesthetic quality of an image means that a model automatically scores the aesthetic level of the image. However, many factors determine whether a photograph is beautiful or ugly. Therefore, extracting a variety of representative aesthetic features and fusing these features remain difficult tasks. In this paper, we design a two-stream network to calculate the aesthetic quality of an image. The upper stream is an improved SEResNet-50 with six added skip connections, which improves model performance without additional training and extracts deep convolutional neural network features. The lower stream consists of the proposed handcrafted aesthetic feature extraction algorithms and multiple convolutional layers. Finally, a novel feature fusion layer is proposed to fuse the features of the two streams without increasing the feature dimension. The results show that this novel feature fusion method can produce results close to human aesthetic evaluation. INDEX TERMS Deep convolutional neural networks, feature fusion, handcrafted aesthetic features, image aesthetics quality assessment.
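A dimension-preserving fusion can be pictured as a weighted element-wise sum, which, unlike concatenation, keeps the feature length fixed. The sketch below is a minimal illustration under that assumption; the function name and the fixed weight `alpha` are hypothetical, not the paper's actual fusion layer.

```python
def fuse_without_concat(deep_feats, handcrafted_feats, alpha=0.5):
    """Fuse two same-length feature vectors without growing the dimension.

    A weighted element-wise sum returns a vector of the same length,
    whereas concatenation would double it.
    """
    if len(deep_feats) != len(handcrafted_feats):
        raise ValueError("feature vectors must have the same length")
    return [alpha * d + (1.0 - alpha) * h
            for d, h in zip(deep_feats, handcrafted_feats)]

deep = [1.0, 2.0]          # deep CNN features (illustrative values)
handcrafted = [3.0, 4.0]   # handcrafted aesthetic features
print(fuse_without_concat(deep, handcrafted))  # [2.0, 3.0]
```

Keeping the dimension fixed matters because the layers after fusion do not need to grow with every extra feature source.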
Speckle noise reduces the image quality of synthetic aperture radar (SAR) and makes interpretation more difficult. Existing SAR image despeckling convolutional neural networks require large quantities of noisy-clean image pairs, yet obtaining clean SAR images is very difficult. Moreover, because successive convolution and pooling operations lose many informational details while extracting the deep features of a SAR image, the quality of the recovered clean image degrades. We therefore propose a despeckling network called the multiscale dilated residual U-Net (MDRU-Net), which can be trained directly on noisy-noisy image pairs without clean data. To preserve more SAR image details, we design five multiscale dilated convolution modules that extract and fuse multiscale features. Considering that deep and shallow features differ greatly at fusion time, we design different dilated residual skip connections so that features at the same level undergo the same convolution operations. Afterward, we present an effective L_hybrid loss function that improves network stability and suppresses artifacts in the predicted clean SAR image. Compared with state-of-the-art despeckling algorithms, the proposed MDRU-Net achieves a significant improvement in several key metrics. © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
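The abstract does not spell out the composition of the L_hybrid loss. As an illustrative assumption only, the sketch below combines a mean-squared data term with a total-variation smoothness term, one common way to suppress artifacts in a predicted image; the weighting and the term choice are hypothetical.

```python
def hybrid_loss(pred, target, tv_weight=0.1):
    """Illustrative hybrid loss: MSE data term plus a total-variation term.

    pred, target: images as lists of rows of floats (H x W).
    The TV term penalizes abrupt neighbor differences (artifacts).
    """
    h, w = len(pred), len(pred[0])
    # Data term: mean squared error against the reference image.
    mse = sum((pred[i][j] - target[i][j]) ** 2
              for i in range(h) for j in range(w)) / (h * w)
    # Smoothness term: absolute differences between horizontal and
    # vertical neighbors of the prediction.
    tv = sum(abs(pred[i][j + 1] - pred[i][j])
             for i in range(h) for j in range(w - 1))
    tv += sum(abs(pred[i + 1][j] - pred[i][j])
              for i in range(h - 1) for j in range(w))
    return mse + tv_weight * tv / (h * w)

flat = [[1.0, 1.0], [1.0, 1.0]]
print(hybrid_loss(flat, flat))  # 0.0 for a perfect, artifact-free prediction
```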
A new model is presented for cycle slip detection in triple-frequency observations of the BeiDou navigation satellite system (BDS) when pseudorange observations are missing or insufficiently accurate under harsh or special conditions. Based on the first-order time-difference geometry-free (GF) pseudorange-phase combination model, a new cycle slip detection and correction method based on triple-frequency carrier phase and Doppler observations is proposed. Analyses of the two common sampling intervals (30 s and 1 s) show that the optimal combination coefficients of the proposed model depend on the sampling interval: combinations [4,-2,-3], [-1,-5,6], and [-3,6,-2] are selected to detect and correct cycle slips at the 30 s sampling interval, while [0,-1,1], [1,0,-1], and [-3,2,2] are selected at the 1 s sampling interval. The validity of the phase-Doppler combination model under static conditions and a steady ionosphere is verified by two static experiments at the 30 s and 1 s sampling intervals. Results show that the phase-Doppler combination model achieves the same performance as the pseudorange-phase combination model. All of the small, insensitive, and large cycle slips added to the three types of BDS satellites, belonging respectively to Geostationary Earth Orbit (GEO), Inclined Geosynchronous Orbit (IGSO), and Medium Earth Orbit (MEO), are detected and corrected successfully by the proposed model.
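A hedged sketch of the detection idea: form a linear combination of the three carrier-phase observations with one of the coefficient sets above, difference it between consecutive epochs, and flag a cycle slip when the jump exceeds a threshold. The function names, the threshold value, and the use of raw cycles are illustrative assumptions; the paper's full model additionally uses Doppler observations to predict the expected phase change.

```python
def combine_phase(coeffs, phases):
    """Linear combination a*L1 + b*L2 + c*L3 of carrier phases (in cycles)."""
    a, b, c = coeffs
    l1, l2, l3 = phases
    return a * l1 + b * l2 + c * l3

def detect_cycle_slip(coeffs, phases_prev, phases_curr, threshold=0.5):
    """First-order time difference of the combined observation.

    Returns (slip_detected, epoch_difference). Without a slip the
    combined observation changes slowly between epochs, so a jump
    above the threshold flags a cycle slip on at least one frequency.
    """
    diff = (combine_phase(coeffs, phases_curr)
            - combine_phase(coeffs, phases_prev))
    return abs(diff) > threshold, diff

# One cycle of slip injected on the third frequency between two epochs,
# checked with the [0,-1,1] combination selected for the 1 s interval.
slip, diff = detect_cycle_slip((0, -1, 1), (100.0, 200.0, 300.0),
                               (100.1, 200.1, 301.1))
print(slip)  # True
```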