2014 10th France-Japan / 8th Europe-Asia Congress on Mecatronics (MECATRONICS2014-Tokyo)
DOI: 10.1109/mecatronics.2014.7018569
Human following with a mobile robot based on combination of disparity and color images

Abstract: This paper presents a target tracking system for a mobile robot in both indoor and outdoor environments. A stereo camera, which is robust to sunlight and illumination changes, is used. Human regions in images are detected from 3D information. Using color information, a target region is discriminated from the detected human regions. Hue and saturation are chosen as features robust to illumination changes. Finally, a mobile robot is controlled based on the 3D information of the detected target region. The effect…
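The color-based target discrimination the abstract describes — comparing hue and saturation of detected human regions against a registered target — can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the histogram bin counts, the similarity threshold, and all function names are assumptions.

```python
import colorsys

def hs_histogram(pixels, h_bins=8, s_bins=4):
    """Build a normalized 2D hue-saturation histogram from RGB pixels.

    Hue and saturation are used because they are comparatively robust to
    illumination changes; brightness (value) is deliberately discarded.
    Bin counts are illustrative assumptions.
    """
    hist = [[0.0] * s_bins for _ in range(h_bins)]
    for r, g, b in pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hi = min(int(h * h_bins), h_bins - 1)
        si = min(int(s * s_bins), s_bins - 1)
        hist[hi][si] += 1.0
    n = float(len(pixels)) or 1.0
    return [[count / n for count in row] for row in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical distributions."""
    return sum(min(a, b)
               for row1, row2 in zip(h1, h2)
               for a, b in zip(row1, row2))

def pick_target(region_pixel_lists, target_hist, threshold=0.5):
    """Return the index of the detected human region whose hue-saturation
    histogram best matches the registered target, or None if no region
    exceeds the (assumed) similarity threshold."""
    best_i, best_sim = None, threshold
    for i, pixels in enumerate(region_pixel_lists):
        sim = histogram_intersection(hs_histogram(pixels), target_hist)
        if sim > best_sim:
            best_i, best_sim = i, sim
    return best_i
```

In practice the region pixels would come from the human regions segmented out of the disparity image, and the robot controller would then use the 3D position of the selected region.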

Cited by 9 publications (4 citation statements); References 8 publications.
“…Many early versions of human‐following robots are based on the visual texture of the target [4]. Isobe et al. [5] extract hue and saturation information from each detected person and compare it with the previously registered target. This kind of method is less effective in the case of out‐of‐plane rotation, because the visual texture changes significantly at such times.…”
Section: Related Work
confidence: 99%
“…As illustrated in Figure 6e, the presence of a person corresponds to a specific pattern in terms of shape, average distance, and the number of points in the 3D point cloud. Usually, a template is designed based on the expected values of these attributes, which is then used for detection (Isobe et al., 2014; Satake et al., 2013). A family of person-following algorithms applies similar methodologies to laser range finder (LRF) and sonar data.…”
Section: Ground Scenario
confidence: 99%
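The template-based detection scheme this statement describes — checking point-cloud clusters against expected shape, size, and point-count values — can be sketched as below. The attribute names and every threshold value are illustrative assumptions, not figures taken from the cited papers.

```python
def matches_person_template(cluster,
                            height_range=(1.2, 2.0),  # metres; assumed bounds
                            width_range=(0.3, 0.9),   # metres; assumed bounds
                            min_points=150):          # assumed minimum density
    """Check one 3D point-cloud cluster against a person-like template.

    A cluster is summarized by its bounding-box height and width and by
    how many points it contains; a person is expected to fall inside
    narrow ranges for all three attributes.
    """
    h_ok = height_range[0] <= cluster["height"] <= height_range[1]
    w_ok = width_range[0] <= cluster["width"] <= width_range[1]
    n_ok = cluster["num_points"] >= min_points
    return h_ok and w_ok and n_ok

def detect_people(clusters):
    """Return the indices of clusters that fit the person template."""
    return [i for i, c in enumerate(clusters) if matches_person_template(c)]
```

The same attribute-gating idea carries over to LRF and sonar data, where clusters of range readings replace point-cloud clusters.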
“…To observe the guide (surroundings), the following sensors are most commonly used: vision systems, laser rangefinders, ultrasonic sensors, satellite navigation systems, and wireless radio technologies such as Wi-Fi, Bluetooth, and ultra-wideband (UWB) [12][13][14][15][16][17][18][19][20][21]. Vision systems and laser rangefinders lead in popularity because they additionally enable observation of the platform's surroundings, including the guide [12][13][14][15][16][17][18][19][20]. However, in the context of working in an outdoor environment, vision systems are highly exposed to unfavorable lighting conditions (uneven lighting and low visibility at night), while for laser rangefinders the main problems are transparent (e.g., glass) or dark surfaces and certain weather conditions, such as rain.…”
Section: Introduction
confidence: 99%