2018
DOI: 10.1002/rob.21783

Three‐dimensional reconstruction of underwater objects using wide‐aperture imaging SONAR

Abstract: The estimation of the geometric structure of objects located underwater underpins a plethora of applications, such as mapping shipwrecks for archaeology, monitoring the health of coral reefs, detecting faults in offshore oil rigs and pipelines, and detecting and identifying potential threats on the seabed. Acoustic imaging is the most popular choice for underwater sensing. Underwater exploratory vehicles typically employ wide-aperture Sound Navigation and Ranging (SONAR) imaging sensors. Although their w…

Cited by 63 publications (25 citation statements; published 2019–2024)
References 24 publications
“…To identify a certain known object in the point cloud obtained from MVS, the work [18] suggests using a 3D model of this object, preprocessed and transformed into a point cloud defined in the coordinate system $Oxyz$. This cloud, consisting of $m$ points $K_q \in \mathbb{R}^3$ ($q = \overline{1, m}$), is first moved toward the cloud of points belonging to the actual object by a linear translation along the vector $\vec{p}_1 \in \mathbb{R}^3$, whose magnitude and direction are determined by the systems for detecting underwater objects that stand out from their surroundings [9,10]. Then the displaced cloud, defined by the points $P_q = K_q + \vec{p}_1$, is aligned with the point cloud of the object using the iterative closest point (ICP) algorithm [29] implemented in the open-access Point Cloud Library (PCL) [30].…”
Section: Improving the Accuracy of Determination of Work Object's Sha… (mentioning)
Confidence: 99%
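The two-stage alignment quoted above (a coarse translation by the detection-derived vector $\vec{p}_1$, followed by ICP refinement in PCL) can be illustrated with a short C++ sketch. This is a minimal sketch of the general technique, not the cited work's code: the file names model.pcd and scene.pcd, the translation values, and the ICP parameters are assumptions for illustration.

```cpp
// Minimal sketch: coarse translation + ICP refinement with PCL.
// Assumptions: model.pcd holds the preprocessed 3D-model cloud (the points
// K_q), scene.pcd holds the cloud of the actual object, and the translation
// below stands in for the detection-derived vector p1.
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <pcl/registration/icp.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr model(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr scene(new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile("model.pcd", *model) < 0 ||
      pcl::io::loadPCDFile("scene.pcd", *scene) < 0) {
    std::cerr << "could not load input clouds\n";
    return 1;
  }

  // Coarse alignment: shift the model cloud by p1 (values assumed here; in
  // the cited pipeline they come from the object-detection subsystem).
  Eigen::Affine3f shift = Eigen::Affine3f::Identity();
  shift.translation() << 1.0f, 0.5f, -2.0f;
  pcl::PointCloud<pcl::PointXYZ>::Ptr shifted(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::transformPointCloud(*model, *shifted, shift);

  // Fine alignment: ICP as implemented in PCL.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(shifted);
  icp.setInputTarget(scene);
  icp.setMaximumIterations(50);
  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned);

  if (icp.hasConverged()) {
    // Rigid transform mapping the shifted model onto the object cloud.
    std::cout << "fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << std::endl;
  }
  return 0;
}
```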
“…Their efficiency has been confirmed experimentally. Prior to the operations, the object is identified on the basis of data received from the onboard sonars [9,10], and the UUV is then sent to the object. At the same time, a site for stabilized hovering of the UUV is selected so that the working zone of the manipulator allows correct execution of the required operations while avoiding collisions of the vehicle with the bottom or the object [8].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Forward‐looking SONARs (FLSs) are commonly employed in 2D mosaicing (Ferreira et al., 2015; Hurtós et al., 2015), automatic target recognition (ATR; Valdenegro‐Toro, 2019), 3D mapping and surface reconstruction (Aykin & Negahdaripour, 2013; Guerneve et al., 2018), and navigation aiding (Franchi et al., 2020; Westman & Kaess, 2019). In the context of navigation aiding, these devices have most often been relegated to bounding the navigation drift, working alongside a standard navigation sensor set consisting, for example, of a Doppler velocity log (DVL) and an attitude and heading reference system (AHRS).…”
Section: Introduction (mentioning)
Confidence: 99%
“…1). Using polar coordinates, the intensity of a pixel on a sonar image can be represented by [3]

$$I(r, \theta) = \int_{\varphi_1}^{\varphi_2} \beta(\varphi)\, V_s(r, \theta, \varphi)\, D_s(r, \theta, \varphi)\, d\varphi,$$

where a single pixel is formed by the contribution of all the intensities from points in three-dimensional (3D) space $(r, \theta, \varphi)$ over the aperture $[\varphi_1, \varphi_2]$. Here $\beta(\varphi)$ is a function related to the beam pattern, $V_s(r, \theta, \varphi)$ is a measure related to the object's reflectivity, and

$$D_s(r, \theta, \varphi) = \frac{\vec{v} \cdot \vec{n}_{r\theta\varphi}}{\|\vec{v}\|\, \|\vec{n}_{r\theta\varphi}\|}$$

is the cosine of the angle between the beam direction $\vec{v}$ and the surface normal $\vec{n}_{r\theta\varphi}$ at the point $(r, \theta, \varphi)$.…”
Section: Introduction (mentioning)
Confidence: 99%
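To make the aperture integral quoted above concrete, here is a small numerical sketch in C++ that discretizes it with the midpoint rule. The beam pattern $\beta$, reflectivity term $V_s$, and cosine term $D_s$ used in main() are toy placeholders (assumed), not a model of any particular sonar.

```cpp
// Numerical sketch of the wide-aperture image-formation model quoted above:
// I(r, theta) = integral over phi in [phi1, phi2] of
//               beta(phi) * V_s(r, theta, phi) * D_s(r, theta, phi) dphi.
#include <cmath>
#include <cstdio>
#include <functional>

double pixelIntensity(double r, double theta, double phi1, double phi2,
                      const std::function<double(double)>& beta,
                      const std::function<double(double, double, double)>& Vs,
                      const std::function<double(double, double, double)>& Ds,
                      int nSteps = 100) {
  // Midpoint-rule discretization of the elevation aperture [phi1, phi2].
  const double dphi = (phi2 - phi1) / nSteps;
  double I = 0.0;
  for (int i = 0; i < nSteps; ++i) {
    const double phi = phi1 + (i + 0.5) * dphi;
    I += beta(phi) * Vs(r, theta, phi) * Ds(r, theta, phi) * dphi;
  }
  return I;
}

int main() {
  // Toy inputs (assumed): cosine-shaped beam pattern, uniform reflectivity,
  // and a surface whose normal makes angle phi with the beam, so D_s = cos(phi).
  auto beta = [](double phi) { return std::cos(phi); };
  auto Vs = [](double, double, double) { return 1.0; };
  auto Ds = [](double, double, double phi) { return std::cos(phi); };

  // Aperture of roughly +/-10 degrees (in radians), range 10 m, broadside beam.
  const double I = pixelIntensity(10.0, 0.0, -0.175, 0.175, beta, Vs, Ds);
  std::printf("I(r=10, theta=0) = %f\n", I);
  return 0;
}
```

The sketch illustrates why a single pixel is ambiguous in elevation: many $(r, \theta, \varphi)$ points across the aperture contribute to the same intensity value, which is the ambiguity the surveyed 3D-reconstruction methods must resolve.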