An automatic fabric defect detection system based on computer vision can replace manual inspection. In this paper, we propose a hardware-accelerated algorithm based on a small-scale over-complete dictionary (SSOCD) obtained via sparse coding (SC), realized on a parallel hardware platform (TMS320C6678). To reduce computation, the projections of image patches onto the trained SSOCD are taken as features; these features are more robust and show clear advantages in both detection results and computational cost. Furthermore, we introduce the detection ratio and false ratio to measure the performance and reliability of the hardware-accelerated algorithm. Experiments show that the proposed algorithm runs with high parallel efficiency and that its detection speed meets the real-time requirements of industrial inspection.
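To make the projection-based feature extraction concrete, the following minimal sketch learns a small over-complete dictionary from defect-free fabric patches and scores test patches by their sparse-coding reconstruction error. This is not the authors' TMS320C6678 implementation; the patch size, dictionary size, sparsity level, and threshold are illustrative assumptions, and the paper's exact training protocol may differ.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

# Illustrative settings (not from the paper): 8x8 patches (64-dim) and a
# 128-atom dictionary, so the dictionary is over-complete.
PATCH = (8, 8)
N_ATOMS = 128

def train_ssocd(defect_free_image):
    """Learn a small-scale over-complete dictionary from defect-free patches."""
    patches = extract_patches_2d(defect_free_image, PATCH, max_patches=2000)
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)            # remove per-patch mean
    learner = MiniBatchDictionaryLearning(n_components=N_ATOMS, alpha=1.0,
                                          transform_algorithm='omp')
    learner.fit(X)
    return learner.components_                    # shape: (N_ATOMS, 64)

def defect_map(test_image, dictionary, n_nonzero=5, threshold=8.0):
    """Flag patches whose sparse-coding reconstruction error is large."""
    patches = extract_patches_2d(test_image, PATCH)
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)
    codes = sparse_encode(X, dictionary, algorithm='omp',
                          n_nonzero_coefs=n_nonzero)   # patch projections
    residual = np.linalg.norm(X - codes @ dictionary, axis=1)
    return residual > threshold                   # True marks candidate defects
```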
Image deblurring is a challenging problem in image processing that aims to reconstruct an original high-quality image from a blurred measurement caused by various factors, for example, imperfect focusing by the imaging system or the different scene depths that commonly appear in everyday photos. Recently, sparse representation, whose basic idea is to code an image patch as a linear combination of a few atoms chosen from an overcomplete dictionary, has shown promising results in image deblurring. Building on this and on another important property, nonlocal self-similarity, some researchers have developed nonlocal sparse regularization models that unify local sparsity and nonlocal self-similarity in a variational framework for image deblurring. In such models, the similarity measure used to search for similar image patches is indispensable and strongly influences deblurring performance. Although the traditional Euclidean distance is a common choice of similarity metric, it can yield inferior performance because it fails to capture the intrinsic structure of image patches. Consequently, in this article, based on the structural similarity index and principal component analysis, we propose nonlocal sparse regularization-based image deblurring with novel similarity criteria, called the structural similarity distance and the principal component analysis-subspace Euclidean distance, to improve deblurring accuracy. The structural similarity index is commonly used to assess perceptual image quality, and principal component analysis is widely used in pattern recognition and dimensionality reduction. In our comprehensive experiments, nonlocal sparse regularization-based image deblurring with the proposed similarity criteria achieves a higher peak signal-to-noise ratio and better consistency with subjective visual perception than state-of-the-art deblurring algorithms.
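As a rough illustration of the two similarity criteria described above (not the article's exact definitions), the sketch below computes a PCA-subspace Euclidean distance between a query patch and candidate patches, plus one plausible form of a structural-similarity-based distance. The number of principal components and the assumption of 8x8 or larger patches are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity

def pca_subspace_distances(query_patch, candidate_patches, n_components=8):
    """Euclidean distance between patches after projection onto a PCA subspace.

    query_patch:       (d,) flattened patch
    candidate_patches: (n, d) flattened candidate patches
    """
    X = np.vstack([query_patch[None, :], candidate_patches]).astype(np.float64)
    X_centered = X - X.mean(axis=0, keepdims=True)
    # Principal directions of the patch group via SVD of the centered data.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    basis = Vt[:n_components]                 # (k, d) leading components
    Z = X_centered @ basis.T                  # project into the subspace
    # Distance from the query (row 0) to each candidate in the subspace.
    return np.linalg.norm(Z[1:] - Z[0], axis=1)

def ssim_distance(patch_a, patch_b, data_range=255):
    """One simple 'structural similarity distance': smaller means more similar.

    Assumes 2D patches at least 7x7 (the default SSIM window size).
    """
    return 1.0 - structural_similarity(patch_a, patch_b, data_range=data_range)
```

In a nonlocal patch search, either distance would replace the plain Euclidean distance when ranking candidate patches inside a search window around the query patch.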
3D object detection in LiDAR point clouds has been widely used in autonomous driving, intelligent robotics, and augmented reality. Although the one-stage 3D detector offers satisfactory training and inference speed, its performance still suffers from insufficient use of bird's-eye-view (BEV) information. In this paper, a new backbone network is proposed to perform cross-layer fusion of multi-scale BEV feature maps, making full use of the available information for detection. Specifically, the proposed backbone network is divided into a coarse branch and a fine branch. In the coarse branch, we use a pyramidal feature hierarchy (PFH) to generate multi-scale BEV feature maps, which retain the advantages of different levels and serve as the input of the fine branch. In the fine branch, our proposed pyramid splitting and aggregation (PSA) module deeply integrates the multi-scale feature maps from different levels, thereby improving the expressive ability of the final features. Extensive experiments on the challenging KITTI-3D benchmark show that our method outperforms several previous state-of-the-art methods in both 3D and BEV object detection. Experimental results measured by average precision (AP) demonstrate the effectiveness of our network.
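The abstract does not specify the internal design of the PSA module, so the following is only a toy sketch of the general idea of cross-layer fusion: multi-scale BEV feature maps from a pyramid are brought to a common channel width and resolution and then aggregated. All channel counts, strides, and layer choices are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerBEVFusion(nn.Module):
    """Toy cross-layer fusion of multi-scale BEV feature maps (not the paper's PSA)."""

    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        # 1x1 convolutions map every pyramid level to a common channel width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)
        self.fuse = nn.Conv2d(out_channels * len(in_channels), out_channels,
                              kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: BEV maps from fine to coarse, e.g. strides 1, 2, 4.
        target_size = feats[0].shape[-2:]
        aligned = [F.interpolate(lat(f), size=target_size, mode='bilinear',
                                 align_corners=False)
                   for lat, f in zip(self.lateral, feats)]
        return self.fuse(torch.cat(aligned, dim=1))

# Example: three BEV pyramid levels for a 2-sample batch.
bev = [torch.randn(2, 64, 200, 176),
       torch.randn(2, 128, 100, 88),
       torch.randn(2, 256, 50, 44)]
print(CrossLayerBEVFusion()(bev).shape)   # torch.Size([2, 128, 200, 176])
```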
Automatic ship detection in optical remote sensing images is of great significance due to its broad applications in maritime security and fishery control. Most ship detection algorithms use a single-band image to design low-level, hand-crafted features, which are easily affected by interference such as clouds and strong waves and are not robust to the large-scale variation of ships. In this paper, we propose a novel coarse-to-fine ship detection method based on the discrete wavelet transform (DWT) and a deep residual dense network (DRDN) to address these problems. First, multi-spectral images are adopted for sea-land segmentation, and an enhanced DWT is employed to quickly extract ship candidate regions with as few missed detections as possible. Second, panchromatic images with clear spatial details are used for ship classification. Specifically, we propose the local residual dense block (LRDB) to fully extract semantic features via a local residual connection and densely connected convolutional layers. The DRDN mainly consists of four LRDBs and is designed to further remove false alarms. Furthermore, we exploit a multiclass classification strategy, which can overcome the large intra-class difference of targets and identify ships of different sizes. Extensive experiments demonstrate that the proposed method is highly robust to complex image backgrounds and achieves higher detection accuracy than other state-of-the-art methods.
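The LRDB structure described above (densely connected convolutional layers closed by a local residual connection) can be sketched as follows; the layer count, growth rate, and channel width are illustrative guesses rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LocalResidualDenseBlock(nn.Module):
    """Sketch of a local residual dense block (LRDB), assuming 3x3 dense convs
    followed by 1x1 fusion and a local residual connection around the block."""

    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # 1x1 convolution fuses the densely connected features back to `channels`.
        self.fusion = nn.Conv2d(channels + n_layers * growth, channels, 1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # dense connections
        return x + self.fusion(torch.cat(features, dim=1))      # local residual

# Example: stack four LRDBs, mirroring the abstract's description of the DRDN.
drdn = nn.Sequential(*[LocalResidualDenseBlock() for _ in range(4)])
print(drdn(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```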