2008
DOI: 10.1007/s10514-008-9094-7

Visual-model-based, real-time 3D pose tracking for autonomous navigation: methodology and experiments

Abstract: This paper presents a novel 3D-model-based computer-vision method for tracking the full six degree-of-freedom (dof) pose (position and orientation) of a rigid body in real time. The methodology has been targeted for autonomous navigation tasks, such as interception of or rendezvous with mobile targets. Tracking an object's complete six-dof pose makes the proposed algorithm useful even when targets are not restricted to planar motion (e.g., flying or rough-terrain navigation). Tracking is achieved via a combina…

Citations: Cited by 10 publications (4 citation statements)
References: 43 publications
“…In the context of formation control, the problem of a limited sensor perception space is critical, along with collision avoidance and trajectory tracking. One approach for vision sensors is visual servoing based on optical flow or features, as shown in de Ruiter and Benhabib (2008), Kendoul et al. (2009) and Erkent and Işıl Bozma (2012). For traditional backstepping controllers, sensor perception limits are addressed by switching the robot's formation control according to compliance with the sensor constraints (Wang et al. 2015).…”
Section: Related Work
confidence: 99%
“…The approach described by [23] uses a modified version of the Active Appearance Model, which allows for partial and self-occlusion of the objects and for high accuracy and precision. The method of [31] minimizes the optical flow between the projection of a textured model and the camera image. To compensate for shadows and changing lighting, they apply an illumination normalisation technique.…”
Section: Related Work
confidence: 99%
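
To make the model-projection idea in the statement above concrete, here is a minimal Python sketch of a photometric residual with illumination normalisation. It is an illustration under stated assumptions, not the technique of [31]: the render(pose) function referred to in the comments is hypothetical, and zero-mean/unit-variance normalisation is just one plausible way to discount shadows and lighting changes.

```python
import numpy as np

def normalise(patch, eps=1e-8):
    """Zero-mean, unit-variance normalisation of an image (or patch).

    This makes the residual below invariant to affine brightness
    changes (one plausible form of the illumination normalisation
    mentioned above; an assumption, not necessarily the cited
    paper's exact technique).
    """
    patch = patch.astype(float)
    return (patch - patch.mean()) / (patch.std() + eps)

def photometric_residual(rendered, observed):
    """Per-pixel residual between a rendered view of the textured model
    and the camera image. Minimising its squared norm over the 6-dof
    pose (e.g. with scipy.optimize.least_squares wrapped around a
    hypothetical render(pose) function) aligns model and image.
    """
    return (normalise(rendered) - normalise(observed)).ravel()
```

In a tracking loop, one would evaluate this residual at each candidate pose; the normalisation keeps the cost from being dominated by global lighting differences between the model texture and the live image.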
“…Iterative particle filtering: as proposed in previous works [24, 31], iterative particle filtering increases responsiveness to rapid pose changes. To this end, steps 2 and 3 of Algorithm 1 are performed several times on the same image.…”
Section: Monte Carlo Particle Filtering (MCPF)
confidence: 99%
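
As an illustration of the iterated update described above, the following Python sketch re-weights and resamples the same particle set several times on a single image. It is a minimal sketch under assumptions: the 6-dof pose parameterisation and the likelihood(pose, image) measurement model are hypothetical stand-ins, and the "step 2"/"step 3" labels map onto the weighting and resampling passes of the cited Algorithm 1.

```python
import numpy as np

def iterated_pf_update(particles, image, likelihood, jitter=0.01, n_iter=3):
    """Iterated particle-filter update on a single image.

    particles  : (N, 6) array of 6-dof pose hypotheses (a hypothetical
                 parameterisation, e.g. x, y, z, roll, pitch, yaw).
    likelihood : callable(pose, image) -> non-negative score; a stand-in
                 for the image-based measurement model.
    """
    rng = np.random.default_rng()
    n = len(particles)
    for _ in range(n_iter):
        # Weighting pass ("step 2"): score every hypothesis on the image.
        weights = np.array([likelihood(p, image) for p in particles])
        weights /= weights.sum()
        # Resampling pass ("step 3"): duplicate strong hypotheses, then
        # jitter them so the next pass can refine rather than collapse.
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx] + rng.normal(0.0, jitter, particles.shape)
    # Return the refined particle set and a point estimate of the pose.
    return particles, particles.mean(axis=0)
```

Repeating the two passes concentrates particles on the high-likelihood region of pose space within one frame, which is what makes the filter responsive to rapid pose changes, at the cost of extra likelihood evaluations per image.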
“…In an underwater environment, it is not possible to use laser sensors (Yang et al. 2011), and there are also many difficulties in using vision sensors (de Ruiter and Benhabib 2008; Stelzer et al. 2012) or the global positioning system (Sahawneh et al. 2011; Liang and Jia 2015; Chiang et al. 2011; Leung et al. 2011; Chiang and Huang 2008) because of lighting conditions and signal interference. Therefore, ultra-short baseline (USBL), Doppler velocity log, and sonar have been widely applied in the navigation of underwater vehicles (Lee and Jun 2007; Li et al. 2014, 2015; Allotta et al. 2014, 2015, 2016; Morgado et al. 2011; He et al. 2015).…”
Section: Introduction
confidence: 99%