Proceedings of the 17th International Conference on Informatics in Control, Automation and Robotics 2020
DOI: 10.5220/0009888405130520
An Integrated Object Detection and Tracking Framework for Mobile Robots

Cited by 6 publications (3 citation statements) · References: 0 publications
“…Alternatively, Cao et al [166] proposed the OpenPose system for human skeleton pose estimation from RGB images. In another work, Juel et al [169] presented a multi-object tracking system that can be adapted to work with any detector and utilize streams from multiple cameras. They implemented a procedure of projecting RGB-D-based detections to the robot's base frame, which are later transformed to the global frame using a localization algorithm.…”
Section: Human Detection and Tracking (mentioning)
confidence: 99%
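The passage above describes projecting RGB-D detections into the robot's base frame and then into a global frame via localization. The following Python sketch illustrates that general frame-chaining idea under assumed intrinsics, extrinsics, and robot pose; it is not the authors' implementation, and every name and value in it is illustrative.

```python
# Minimal sketch of the frame-projection idea in the quoted passage: an RGB-D
# detection (pixel + depth) is back-projected with assumed camera intrinsics,
# moved into the robot's base frame, and then into the global (map) frame
# using the robot pose from a localization module. All transforms and values
# here are illustrative assumptions, not the authors' implementation.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel with metric depth into a 3D point (camera frame)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth, 1.0])  # homogeneous coordinates

def to_global(point_cam, T_base_cam, T_map_base):
    """Chain the camera->base and base->map homogeneous transforms."""
    return T_map_base @ T_base_cam @ point_cam

# Example: assumed RGB-D intrinsics, camera mounted at the base origin,
# robot localized at (2 m, 1 m) with a 90-degree yaw in the map frame.
fx = fy = 525.0
cx, cy = 319.5, 239.5
T_base_cam = np.eye(4)
yaw = np.pi / 2
T_map_base = np.array([[np.cos(yaw), -np.sin(yaw), 0.0, 2.0],
                       [np.sin(yaw),  np.cos(yaw), 0.0, 1.0],
                       [0.0,          0.0,         1.0, 0.0],
                       [0.0,          0.0,         0.0, 1.0]])

p_cam = backproject(u=400, v=240, depth=3.0, fx=fx, fy=fy, cx=cx, cy=cy)
p_map = to_global(p_cam, T_base_cam, T_map_base)
print(p_map[:3])  # detection position in the global frame
```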
“…The developed system consists of various vision modules to extract information from the environment and format the information so that it can be used by the robot to make decisions and perform actions. The core module is a multi-camera, multi-detector tracking-by-detection system (Juel et al, 2020), designed specifically for mobile robots. It takes the output from any number of RGB-D sensors and processes it using a set of detectors, for example, the human detector shown in Figure 6A.…”
Section: Visual Modules: Human Detection, Body Pose Estimation, Object and Gaze Detection (mentioning)
confidence: 99%
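The quoted description of a multi-camera, multi-detector tracking-by-detection module suggests a simple structure: run each detector on each RGB-D frame and feed the pooled detections into a tracker. The sketch below is a hypothetical skeleton of that flow; the Detection/Track classes, the MultiCameraTracker name, and the placeholder data association are assumptions, not the published system.

```python
# Hypothetical skeleton of a tracking-by-detection loop that accepts any
# number of camera frames and any set of detector callables, in the spirit
# of the system described in the quote. The data association here is a
# deliberate placeholder, not the published method.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    position: tuple   # 3D position already projected into a common frame
    label: str
    score: float

@dataclass
class Track:
    track_id: int
    position: tuple
    label: str

class MultiCameraTracker:
    def __init__(self, detectors: List[Callable]):
        self.detectors = detectors        # e.g. a human detector, an object detector
        self.tracks: List[Track] = []
        self._next_id = 0

    def step(self, frames: List) -> List[Track]:
        """Run every detector on every camera frame, then update the track list."""
        detections = [d for frame in frames
                        for detect in self.detectors
                        for d in detect(frame)]
        self._associate(detections)
        return self.tracks

    def _associate(self, detections: List[Detection]) -> None:
        # Placeholder association: every detection spawns a new track. A real
        # system would match detections to existing tracks (e.g. nearest
        # neighbour or Hungarian matching) and prune stale tracks.
        for det in detections:
            self.tracks.append(Track(self._next_id, det.position, det.label))
            self._next_id += 1

# Usage with a dummy detector standing in for a real RGB-D human detector:
dummy_detector = lambda frame: [Detection((1.0, 0.0, 2.0), "person", 0.9)]
tracker = MultiCameraTracker([dummy_detector])
print(tracker.step(frames=[None, None]))  # two camera "frames", one detector
```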
“…Aside from the experiments above, real-world experiments were also conducted to test the performance of the algorithm on noisy inputs. The experiment setup consisted of a mobile robot [22] with a 3D human pose estimation capability [23] and 9 humans standing around the robot in different F-formations in a 70 m² area (fig. 6).…”
Section: Real World Experiments on Robot (mentioning)
confidence: 99%