This paper describes a new framework for the segmentation of sonar images, the tracking of underwater objects, and motion estimation. The framework is applied to the design of an obstacle avoidance and path planning system for underwater vehicles based on a multi-beam forward-looking sonar sensor. The real-time data flow (acoustic images) at the input of the system is first segmented and relevant features are extracted. We also take advantage of the real-time data stream to track the obstacles in subsequent frames and obtain their dynamic characteristics, which allows us to optimize the preprocessing phases by segmenting only the relevant parts of the images. Once the static (size and shape) and dynamic characteristics (velocity, acceleration, …) of the obstacles have been computed, we create a representation of the vehicle's workspace based on these features. This representation uses constructive solid geometry (CSG) to create a convex set of obstacles defining the workspace. During path planning, the tracking also accounts for obstacles that are no longer in the sonar's field of view. A well-proven nonlinear search (sequential quadratic programming) is then employed, in which obstacles are expressed as constraints in the search space. This approach is less affected by local minima than classical methods based on potential fields. The proposed system is capable not only of obstacle avoidance but also of path planning in complex environments that include fast-moving obstacles. Results obtained on real sonar data are shown and discussed, along with possible applications to sonar servoing and real-time motion estimation.
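To illustrate the constrained-search idea, here is a minimal sketch of path planning as a nonlinear program solved with SciPy's SLSQP routine, where obstacles enter as inequality constraints. The circular obstacle model, waypoint parameterization, and solver choice are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: path planning as a nonlinear program, with obstacles
# expressed as inequality constraints (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import minimize

# Assumed obstacle model: discs (x, y, radius) built from tracked features.
obstacles = [(4.0, 3.0, 1.5), (7.0, 6.0, 1.0)]
start, goal = np.array([0.0, 0.0]), np.array([10.0, 8.0])
n_waypoints = 8  # free intermediate waypoints

def path_length(flat):
    # Total length of the polyline start -> waypoints -> goal.
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def clearance_constraints(flat):
    # >= 0 when every waypoint lies outside every obstacle disc.
    pts = flat.reshape(-1, 2)
    vals = [np.linalg.norm(pts - np.array([ox, oy]), axis=1) - r
            for ox, oy, r in obstacles]
    return np.concatenate(vals)

# Initial guess: straight line between start and goal.
x0 = np.linspace(start, goal, n_waypoints + 2)[1:-1].ravel()

result = minimize(
    path_length, x0, method="SLSQP",
    constraints=[{"type": "ineq", "fun": clearance_constraints}],
)
waypoints = result.x.reshape(-1, 2)
print(result.success, path_length(result.x))
```

In this formulation the planner deforms the straight-line path only where the clearance constraints are active, which is the sense in which obstacles act as constraints in the search space rather than as potential-field terms.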
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed, and the techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and the motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques, and a simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework, for which two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown; the Markov model is also used to inpaint regions where no final classification decision can be reached using pixel-level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models on both synthetic images and real data from a large-scale survey shows significant quantitative and qualitative improvements from the fusion approach.
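A hedged illustration of the first fusion model follows: per-pixel voting over several classified mosaics, regularized by an isotropic Potts-style Markov random field and optimized with iterated conditional modes. The smoothing weight, the ICM optimizer, and the handling of undecided pixels are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative sketch (assumed parameters, not the paper's exact model):
# fuse several classified maps by per-pixel voting, then regularize the
# label field with an isotropic Potts prior using iterated conditional modes.
import numpy as np

def fuse_and_smooth(label_maps, n_classes, beta=1.0, n_iter=5):
    """label_maps: list of HxW integer class maps; -1 marks 'no decision'."""
    stack = np.stack(label_maps)                      # (sources, H, W)
    H, W = stack.shape[1:]

    # Per-pixel vote counts over the sources (undecided pixels add no votes).
    votes = np.zeros((n_classes, H, W))
    for c in range(n_classes):
        votes[c] = (stack == c).sum(axis=0)

    labels = votes.argmax(axis=0)                     # initial fused map

    # ICM: pick the label maximizing votes + beta * (agreeing 4-neighbours).
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                neigh = [labels[m, n] for m, n in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= m < H and 0 <= n < W]
                score = votes[:, i, j].copy()
                for c in range(n_classes):
                    score[c] += beta * sum(l == c for l in neigh)
                labels[i, j] = int(score.argmax())
    return labels
```

Pixels with no votes (no source reached a decision) are labelled entirely by the neighbourhood term, which mirrors the role of the Markov model in inpainting undecided regions.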
We study the problem of periodicity detection in massive data sets of photometric or radial-velocity time series, such as those produced by ESA's Gaia mission. Periodicity detection hinges on the estimation of the false alarm probability (FAP) of the extremum of the periodogram of the time series. We consider its estimation with two main issues in mind. First, for a given number of observations and signal-to-noise ratio, the rate of correct periodicity detections should be constant across all realized observation cadences, regardless of the observational time patterns, in order to avoid sky biases that are difficult to assess. Second, the computational load should remain feasible even for millions of time series. Using the Gaia case, we compare the FM method (Paltani 2004; Schwarzenberg-Czerny 2012), the Baluev method (Baluev 2008), the GEV method (Süveges 2014), and a method for the direct estimation of a detection threshold. Three of these methods involve unknown parameters, which are obtained by fitting a regression-type predictive model using easily obtainable covariates derived from the observational time series. We conclude that the GEV and Baluev methods both provide good solutions to the issues posed by large-scale processing. The former yields the best scientific quality at the price of some moderately costly pre-processing. When this pre-processing is impossible (e.g. because the computational costs are prohibitive or good regression models cannot be constructed), the Baluev method provides a computationally inexpensive alternative, with slight biases in regions where the time samplings exhibit strong aliases.
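A hedged sketch of the extreme-value idea behind the GEV method: simulate noise-only series on the real observation cadence, collect the periodogram maxima, fit a generalized extreme value distribution, and read the FAP of an observed maximum from its survival function. The Lomb-Scargle implementation, frequency grid, and simulation count are illustrative assumptions; the actual GEV method replaces the brute-force simulations with a regression model on cadence-derived covariates.

```python
# Hedged sketch of GEV-based false-alarm-probability calibration
# (illustrative choices of periodogram and simulation count, not Gaia code).
import numpy as np
from scipy.signal import lombscargle
from scipy.stats import genextreme

rng = np.random.default_rng(0)

def max_power(t, y, freqs):
    # Maximum of the Lomb-Scargle periodogram over the trial frequencies.
    y = y - y.mean()
    return lombscargle(t, y, freqs).max()

# Irregular observation cadence and trial (angular) frequency grid.
t = np.sort(rng.uniform(0.0, 100.0, 80))
freqs = 2 * np.pi * np.linspace(0.01, 5.0, 2000)

# Noise-only simulations on the same cadence -> sample of periodogram maxima.
maxima = np.array([max_power(t, rng.normal(size=t.size), freqs)
                   for _ in range(500)])

# Fit a generalized extreme value distribution to the simulated maxima.
shape, loc, scale = genextreme.fit(maxima)

# FAP of an observed maximum = tail probability under the fitted GEV.
observed_max = max_power(t, rng.normal(size=t.size) + 0.5 * np.sin(2.5 * t), freqs)
fap = genextreme.sf(observed_max, shape, loc, scale)
print(f"estimated FAP: {fap:.3g}")
```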