2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2012.6385620
Automatic calibration of a stationary network of laser range finders by matching movement trajectories

Cited by 14 publications (8 citation statements). References 23 publications.
“…Another approach consists of matching the trajectories of dynamic objects (or people) in the scene (Glas et al, 2010; Schenk et al, 2012). For that, the trajectory of one or several objects is tracked independently by each LRF, and these trajectories are registered to constrain the sensors' relative poses.…”
Section: Related Work
confidence: 99%
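
The excerpt above describes the core idea of the cited paper: trajectories of the same moving person, tracked independently by two LRFs, are rigidly registered to recover the sensors' relative pose. Below is a minimal sketch of such a registration in 2D, assuming time-synchronized trajectories and a closed-form least-squares (Kabsch/Umeyama) alignment; the function name and interface are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: estimate the relative SE(2) pose between two LRFs
# by rigidly aligning a person trajectory observed by both sensors.
# Assumes the trajectories are time-synchronized (same index = same instant).
import numpy as np

def align_trajectories(traj_a: np.ndarray, traj_b: np.ndarray):
    """Return (R, t) such that traj_b ~= traj_a @ R.T + t.

    traj_a, traj_b: (N, 2) arrays of the same trajectory expressed in
    the frames of sensor A and sensor B, respectively.
    """
    mu_a = traj_a.mean(axis=0)
    mu_b = traj_b.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (traj_a - mu_a).T @ (traj_b - mu_b)
    U, _, Vt = np.linalg.svd(H)
    # Closed-form 2D rotation (Kabsch/Umeyama), with a reflection guard.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_b - R @ mu_a
    return R, t
```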
“…For this, we transform each person hypothesis into coordinates of a global map. The global map is generated by calibrating each camera using its intrinsic parameters and multiple laser range finders [15].…”
Section: Merging Image-based Hypotheses Into a Global Metatrack Representation
confidence: 99%
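
As a concrete illustration of transforming per-sensor hypotheses into a global map, the sketch below applies a calibrated sensor pose (such as one recovered by the LRF-network calibration) to a 2D person hypothesis. The pose representation and function name are assumptions for illustration, not the cited system's API.

```python
# Hypothetical sketch: map a 2D person hypothesis from sensor-local
# coordinates into a shared global map frame, given the sensor's
# calibrated pose in that frame.
import numpy as np

def to_global(hypothesis_xy: np.ndarray, sensor_pose: tuple) -> np.ndarray:
    """sensor_pose: (x, y, theta) of the sensor in the global map frame."""
    x, y, theta = sensor_pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    # Rotate into the map frame, then translate by the sensor position.
    return R @ hypothesis_xy + np.array([x, y])
```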
“…It is generated in an offline phase by clustering trajectories obtained by camera-based [11] and laser-range-finder-based people tracking [14,15]. The mean transition time and variance between neighboring nodes are stored in their connecting edge.…”
Section: Prediction
confidence: 99%
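
The data structure sketched below mirrors what this excerpt describes: nodes obtained from clustered trajectories, with each connecting edge storing the mean and variance of the observed transition time between neighboring nodes. Class and field names are hypothetical.

```python
# Hypothetical sketch of the prediction graph: edges summarize observed
# person transition times between neighboring trajectory-cluster nodes.
from dataclasses import dataclass, field

@dataclass
class Edge:
    mean_transition_time: float  # seconds
    variance: float              # seconds^2

@dataclass
class TopologyGraph:
    edges: dict = field(default_factory=dict)  # (node_a, node_b) -> Edge

    def add_transitions(self, node_a: int, node_b: int, times: list):
        """Summarize a list of observed transition times into an edge."""
        n = len(times)
        mean = sum(times) / n
        var = sum((t - mean) ** 2 for t in times) / n
        self.edges[(node_a, node_b)] = Edge(mean, var)
```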
“…Blanco et al [11] applied odometry data and multiple LRFs to construct multiple 3D point clouds of the same scene, and then completed the calibration between LRFs by 3D point cloud matching. Glas et al [12] and Schenk et al [13] estimated the relative positions between LRFs by matching the trajectories of dynamic objects in the scene. Although these methods do not require any specially designed markers, calibration error between the LRFs and the intermediate sensors may accumulate in the calculation of the extrinsic parameters between the LRFs when they do not share enough common field of view.…”
Section: Introduction
confidence: 99%
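
For contrast with trajectory matching, the first alternative in this excerpt is calibration by 3D point cloud matching. The toy point-to-point ICP loop below shows the general shape of such a registration; it is a minimal sketch under simplifying assumptions (reasonable initialization, substantial overlap), not the cited implementation.

```python
# Hypothetical sketch: minimal point-to-point ICP, the style of 3D
# point-cloud matching used for LRF-to-LRF calibration in [11].
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
    """Rigidly align source (N, 3) to target (M, 3); returns (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        # Match each source point to its nearest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # Closed-form (Kabsch) update on the current correspondences.
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        # Accumulate the incremental transform into the overall estimate.
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```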