2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2016.7759716

RL-IAC: An exploration policy for online saliency learning on an autonomous mobile robot

Abstract: In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot for the task of learning saliency. For that, we describe the Reinforcement Learning-Intelligen…
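
The abstract describes an exploration policy built on Intelligent Adaptive Curiosity, in which the robot favours regions where its saliency model is still improving. As a rough illustration only, the sketch below implements a generic learning-progress criterion for region selection; the region representation, window size, and error signal are assumptions, not the paper's actual RL-IAC implementation.

```python
# Hedged sketch of an IAC-style region-selection loop; NOT the authors' exact
# RL-IAC algorithm. Window size and the saliency-error signal are assumptions.
from collections import defaultdict, deque

WINDOW = 20  # number of recent prediction errors kept per region (assumed)

# region identifier -> recent saliency prediction errors
errors = defaultdict(lambda: deque(maxlen=WINDOW))

def learning_progress(region):
    """Learning progress = decrease of the mean prediction error over the window."""
    hist = errors[region]
    if len(hist) < WINDOW:
        return float("inf")          # barely explored regions stay maximally interesting
    half = WINDOW // 2
    older = sum(list(hist)[:half]) / half
    recent = sum(list(hist)[half:]) / half
    return older - recent            # positive while the saliency model keeps improving

def select_next_region(regions):
    """Pick the region where the saliency model currently makes the most progress."""
    return max(regions, key=learning_progress)

def record_observation(region, prediction_error):
    """Store the saliency prediction error measured after visiting `region`."""
    errors[region].append(prediction_error)
```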

Cited by 10 publications (10 citation statements)
References 19 publications
“…This article extends Craye et al. [19] by providing more technical details and experimental results. It also presents a new type of feature extractor based on convolutional neural networks, and a method to automatically and incrementally obtain a navigation graph required by the RL-IAC algorithm. The CNN feature extraction approach was already presented in a previous work [18], but the evaluation was carried out on a different dataset, and the use of RL-IAC was not investigated.…”
Section: Introduction (supporting)
confidence: 60%
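
The statement above mentions a navigation graph that is obtained automatically and incrementally for RL-IAC, without quoting how. The following is only a plausible sketch under a simple distance-threshold assumption, not the cited article's method: a node is added whenever the robot moves sufficiently far from all existing nodes, and edges carry the travel cost used by the planner.

```python
# Hedged sketch of incremental navigation-graph building (assumed heuristic).
import math
import networkx as nx

NODE_SPACING = 1.5  # metres between graph nodes (illustrative value)

graph = nx.Graph()

def update_graph(robot_xy):
    """Add a node at the current robot position if it is far from every existing
    node, and connect it to its nearest neighbour with a travel-cost edge."""
    nearest, nearest_d = None, float("inf")
    for node, data in graph.nodes(data=True):
        d = math.dist(robot_xy, data["xy"])
        if d < nearest_d:
            nearest, nearest_d = node, d
    if nearest_d > NODE_SPACING:
        new_id = graph.number_of_nodes()
        graph.add_node(new_id, xy=robot_xy)
        if nearest is not None:
            graph.add_edge(nearest, new_id, weight=nearest_d)
```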
“…In this article, we consider a mobile robot exploring its environment while building a model of visual saliency enhancing objects of interest. Based upon our previous work [19], we present a system (Fig. 1) able to:…”
Section: Introduction (mentioning)
confidence: 99%
“…We instead use an object detector based on the depth-map to this end. This depth-based object detection has been described in previous publications [14], [15], to which the reader may refer for more details. Apart from this, the saliency learning process is exactly the same.…”
Section: A. Experimental Setup (mentioning)
confidence: 98%
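
The quote refers to a depth-based object detector without giving its details. A minimal, hypothetical illustration of how a depth map can yield object candidates is sketched below; the floor-height heuristic and the thresholds are assumptions, not the method of [14], [15].

```python
# Hedged sketch of a depth-based object-candidate mask; thresholds are illustrative.
import numpy as np

def object_mask(depth, height_map, max_depth=3.0, min_h=0.05, max_h=1.5):
    """Return a binary mask of pixels likely to belong to objects.

    `depth`       : per-pixel depth in metres.
    `height_map`  : per-pixel height above the estimated floor plane,
                    computed beforehand from the depth image (assumed available).
    A pixel is kept if it is close enough to the robot and clearly above the floor.
    """
    valid = np.isfinite(depth) & (depth > 0) & (depth < max_depth)
    above_floor = (height_map > min_h) & (height_map < max_h)
    return valid & above_floor
```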
“…We therefore apply the superpixels approach presented in Section IV-D to increase the resolution of these maps. To evaluate our feature extractor versus the one proposed in previous work [15], we generate saliency maps from both the CNN-based feature extractor (denoted as ISL here) and the former feature extractor (denoted as ISLMake3D) that was used in [15].…”
Section: B. Incremental Saliency Learning (mentioning)
confidence: 99%
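
The quote describes using superpixels to raise the resolution of coarse saliency maps. A small sketch of that general idea follows, assuming SLIC superpixels from scikit-image and illustrative parameters rather than those of the cited Section IV-D: the coarse map is upsampled, then averaged inside each superpixel so saliency boundaries snap to image edges.

```python
# Hedged sketch of superpixel-based saliency refinement (parameters assumed).
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def refine_saliency(rgb, coarse_saliency, n_segments=400):
    """Upsample `coarse_saliency` to the RGB image size, then replace the value
    inside each SLIC superpixel by its mean, aligning saliency with image edges."""
    sal = resize(coarse_saliency, rgb.shape[:2], order=1)
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    refined = np.zeros_like(sal)
    for lab in np.unique(labels):
        mask = labels == lab
        refined[mask] = sal[mask].mean()
    return refined
```

Averaging within superpixels keeps object contours sharp even when the underlying saliency model operates at a much lower resolution than the input image.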