Underwater Seascapes 2014
DOI: 10.1007/978-3-319-03440-9_14
Underwater Multimodal Survey: Merging Optical and Acoustic Data

Cited by 11 publications (8 citation statements) | References 33 publications
“…In contrast, the BV5000 MSS is based on a multi-beam echosounder placed on a pan-tilt system and produces 3D point clouds, as described and illustrated in [11,12]. The BV5000 was one of the first devices of this type to appear on the commercial market, at the beginning of the decade.…”
Section: Related Work
confidence: 99%
“…Data from an Inertial Measurement Unit (IMU) were integrated to estimate the position and orientation of the DWM inside the cave system. Drap et al. (2014) fused data from a static acoustic camera and a photogrammetric system with three synchronized digital cameras to produce a multi-resolution dense 3D model of an underwater cave off the coast of Marseilles. A purely image-based approach was proposed by Weidner (2017), who employed a stereo camera with an illumination source.…”
Section: Underwater Caves
confidence: 99%
“…Such a method is difficult to adopt for the opto-acoustic alignment, because establishing a point-to-point correspondence between 3D points in the two representations is not a simple task, i.e., the low resolution and the strong noise component of the acoustic data do not allow a point position to be localized as precisely as in the optical camera, as demonstrated in [2, 20]. Therefore, it is necessary to find other features suitable for establishing the correspondence between optical and acoustic frames.…”
Section: Relative Orientation of the Opto-Acoustic 3D Camera
confidence: 99%
“…Since methods that rely on explicit opto-acoustic correspondences have to be avoided [2, 20], in our approach the acoustic 3D point clouds representing the orientation rig are matched to their optical counterpart using the Iterative Closest Point (ICP) algorithm, an iterative least-squares technique for the registration of rigid 3D shapes. This approach eliminates the need to perform any feature extraction or to specify any explicit feature correspondence.…”
Section: Relative Orientation of the Opto-Acoustic 3D Camera
confidence: 99%
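The passage above describes the ICP-based opto-acoustic registration only in prose. The following is a minimal illustrative sketch of point-to-point ICP between an acoustic and an optical point cloud using Open3D; the function name, correspondence tolerance, and choice of Open3D are assumptions made here for illustration, not the cited authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code): rigid registration of an
# acoustic 3D point cloud onto its optical (photogrammetric) counterpart with
# point-to-point ICP. Parameter values below are assumed, not from the paper.
import numpy as np
import open3d as o3d

def register_acoustic_to_optical(acoustic_xyz: np.ndarray,
                                 optical_xyz: np.ndarray,
                                 max_corr_dist: float = 0.05) -> np.ndarray:
    """Return the 4x4 rigid transform mapping acoustic points onto optical points."""
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(acoustic_xyz)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(optical_xyz)

    # ICP needs no explicit feature correspondences: at each iteration it pairs
    # every source point with its nearest target point (within max_corr_dist)
    # and solves a least-squares problem for the rigid transform of those pairs.
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Because acoustic data are noisy and coarse compared with the optical reconstruction, such a registration typically benefits from a rough initial alignment and a correspondence distance large enough to tolerate the acoustic noise, which is consistent with the rationale quoted above for avoiding explicit point matches.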