We address the problem of visual saliency from three perspectives. First, we consider saliency detection as a frequency domain analysis problem. Second, we detect saliency indirectly, through the complementary concept of nonsaliency: repeated, nondistinctive content is modeled and suppressed. Third, we simultaneously consider the detection of salient regions of different sizes. The paper proposes a new bottom-up paradigm for detecting visual saliency, characterized by a scale-space analysis of the amplitude spectrum of natural images. We show that convolving the image amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector. The saliency map is obtained by reconstructing the 2D signal from the original phase and the amplitude spectrum filtered at a scale selected by minimizing saliency-map entropy. A Hypercomplex Fourier Transform performs the analysis in the frequency domain. Using available databases, we demonstrate experimentally that the proposed model can predict human fixation data. We also introduce a new image database and use it to show that the saliency detector can highlight both small and large salient regions, as well as inhibit repeated distractors in cluttered images. In addition, we show that it is able to predict salient regions on which people focus their attention.
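To make the pipeline concrete, here is a minimal sketch of the scale-space amplitude-spectrum idea on a single grayscale channel, using an ordinary 2D FFT rather than the paper's Hypercomplex Fourier Transform over color channels; the function name, the scale set, and the simple entropy criterion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectrum_scale_space_saliency(img, scales=(1, 2, 4, 8, 16)):
    """Smooth the amplitude spectrum at several scales, reconstruct each
    candidate map with the original phase, and keep the lowest-entropy map."""
    F = np.fft.fft2(img.astype(float))
    amplitude, phase = np.abs(F), np.angle(F)
    best, best_entropy = None, np.inf
    for sigma in scales:
        # Low-pass filtering the amplitude spectrum suppresses the sharp
        # spectral peaks produced by repeated (nonsalient) patterns.
        smoothed = gaussian_filter(amplitude, sigma, mode='wrap')
        sal = np.abs(np.fft.ifft2(smoothed * np.exp(1j * phase))) ** 2
        sal = gaussian_filter(sal, 3)        # mild spatial post-smoothing
        p = sal / sal.sum()                  # treat the map as a distribution
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        if entropy < best_entropy:           # scale selection by minimum entropy
            best, best_entropy = sal, entropy
    return best / best.max()
```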
We propose a new saliency detection model that combines global information from frequency domain analysis with local information from spatial domain analysis. In the frequency domain, instead of modeling salient regions directly, we model the nonsalient regions using global information: repeating patterns, which are not distinctive in the scene, are suppressed by spectrum smoothing. In the spatial domain, we enhance the more informative regions using a center-surround mechanism similar to that found in the visual cortex. Finally, the outputs of the two channels are combined to produce the saliency map. We demonstrate that the proposed model can highlight both small and large salient regions in cluttered scenes and inhibit repeating objects. Experimental results also show that the proposed model outperforms existing algorithms in predicting the object regions to which humans pay more attention.
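The abstract leaves the two channels and their combination abstract; the sketch below, continuing the previous one, shows one plausible reading: a difference-of-Gaussians center-surround operator for the spatial channel and a simple multiplicative fusion with the frequency-domain map. The fusion rule and both sigma values are assumptions, not the paper's stated design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(img, center_sigma=2, surround_sigma=16):
    """Difference-of-Gaussians center-surround response: pixels that differ
    from their local neighborhood are enhanced."""
    img = img.astype(float)
    return np.abs(gaussian_filter(img, center_sigma)
                  - gaussian_filter(img, surround_sigma))

def combined_saliency(img, freq_map):
    """Fuse the local (spatial) channel with a global (frequency) channel,
    e.g. the spectrum-smoothing map from the previous sketch."""
    spatial = center_surround(img)
    spatial /= spatial.max() + 1e-12
    return spatial * freq_map   # multiplicative fusion is an assumption
```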
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection owing to their great potential for environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient detection of the road vanishing point. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Next, following the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors; the initial road segment is obtained after convergence. Finally, to achieve a globally consistent road segment, the initial segment is refined within the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the overall performance, scale sensitivity, and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared with the state of the art.
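The GrowCut step is the least standard part of this pipeline; below is a minimal pixel-level sketch of the cellular-automaton update (the paper runs it on superpixels, and the neighborhood, iteration count, and attenuation function here are illustrative assumptions).

```python
import numpy as np

def growcut(img, labels, strength, n_iters=50):
    """GrowCut-style label propagation on a grayscale image in [0, 1].
    labels:   0 = unlabeled, 1 = road seed, 2 = background seed.
    strength: per-pixel seed confidence in [0, 1]."""
    for _ in range(n_iters):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nl = np.roll(labels, (dy, dx), axis=(0, 1))     # neighbor labels
            ns = np.roll(strength, (dy, dx), axis=(0, 1))   # neighbor strengths
            ni = np.roll(img, (dy, dx), axis=(0, 1))        # neighbor intensities
            # A neighbor "attacks" with its strength attenuated by the
            # intensity difference; it wins (and propagates its label)
            # wherever the attack exceeds the defender's current strength.
            attack = (1.0 - np.abs(img - ni)) * ns
            win = attack > strength
            labels = np.where(win, nl, labels)
            strength = np.where(win, attack, strength)
    return labels
```

Note that np.roll wraps around at the image border; a production implementation would mask out those transitions.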
Road detection is a crucial problem for autonomous navigation systems (ANS) and advanced driver-assistance systems (ADAS). In this paper, we propose a hierarchical method for robust road detection in challenging scenarios. Given an on-board road image, we first train a Gaussian mixture model (GMM) to obtain a road probability density map (RPDM) and then oversegment the image into superpixels. Based on the RPDM and the superpixels, initial seeds are selected in an unsupervised way; following the GrowCut framework, the seed superpixels iteratively try to occupy their neighbors, and the road segment is obtained after convergence. Finally, we refine the road segment with a conditional random field (CRF), which enforces a shape prior on the road segmentation task. Experiments on two challenging databases demonstrate that the proposed method exhibits high robustness compared with the state of the art.
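As a companion to the first step of this pipeline, here is a minimal sketch of fitting a GMM to presumed road pixels and scoring the whole image to form an RPDM; the sampling region, the number of components, and the normalization are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def road_probability_map(img, road_patch, n_components=3):
    """Fit a GMM to RGB samples from a region presumed to be road
    (e.g. a patch just in front of the vehicle) and score every pixel."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='full')
    gmm.fit(road_patch.reshape(-1, 3).astype(float))
    h, w, _ = img.shape
    log_p = gmm.score_samples(img.reshape(-1, 3).astype(float))
    p = np.exp(log_p - log_p.max())   # rescale log-likelihoods to (0, 1]
    return p.reshape(h, w)
```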
Negative obstacles for field autonomous land vehicles (ALVs) refer to ditches, pits, or terrain with a negative slope, all of which pose risks to vehicles in motion. This paper presents a feature fusion based algorithm (FFA) for negative obstacle detection with LiDAR sensors. The main contributions of this paper are fourfold: (1) A novel three-dimensional (3-D) LiDAR setup is presented. With this setup, the blind area around the vehicle is greatly reduced and the density of the LiDAR data is greatly improved, both of which are critical for ALVs. (2) On the basis of the proposed setup, a mathematical model of the point distribution of a single scan line is derived and used to generate ideal scan lines. (3) Using this model, an adaptive matching filter based algorithm (AMFA) is presented for negative obstacle detection: features of simulated obstacles in each scan line are matched against features of potential real obstacles to identify actual negative obstacles. (4) Building on the AMFA, the feature fusion based algorithm fuses all the features generated by different LiDARs or captured in different frames, with Bayes' rule adopted to estimate the weight of each feature. Experimental results show that the performance of the proposed algorithm is robust and stable. Compared with state-of-the-art techniques, the detection range is improved by 20% and the computing time is reduced by two orders of magnitude. The proposed algorithm has been successfully applied on two ALVs, which won the champion and runner-up positions in the "Overcome Danger 2014" ground unmanned vehicle challenge of China.
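The abstract does not spell out the fusion rule; one standard reading of "Bayes' rule to weight each feature" is an independent-evidence log-odds update per grid cell, sketched below (the prior, the likelihood-ratio encoding of features, and the grid shape are all assumptions).

```python
import numpy as np

def bayes_fuse(prior, likelihood_ratios):
    """Fuse independent per-cell evidence for 'negative obstacle':
    posterior odds = prior odds x product of per-feature likelihood ratios."""
    log_odds = np.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:      # one array per LiDAR or per frame
        log_odds += np.log(lr)
    return 1.0 / (1.0 + np.exp(-log_odds))   # back to a probability

# Example: 4x4 grid, prior 0.1, two observations both favoring 'obstacle'.
posterior = bayes_fuse(np.full((4, 4), 0.1),
                       [np.full((4, 4), 3.0), np.full((4, 4), 2.0)])
```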