Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/682
Virtual-to-Real: Learning to Control in Visual Semantic Segmentation

Abstract: Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus, recent advances in robot learning advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of the models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into…
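The modular split described in the abstract can be pictured as two networks that communicate only through a semantic segmentation map: a perception module that segments the camera image and a control module that acts on the segmentation. The sketch below is an illustrative assumption of such a decomposition, not the authors' implementation; the module names, layer sizes, and the small discrete action space are invented for the example.

import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """Maps an RGB image to per-pixel semantic class logits (hypothetical layout)."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(rgb))  # (B, num_classes, H, W)

class ControlModule(nn.Module):
    """Maps a segmentation map to logits over a small discrete action set."""
    def __init__(self, num_classes: int = 6, num_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_actions),
        )

    def forward(self, seg_logits: torch.Tensor) -> torch.Tensor:
        # The control module never sees raw pixels, only the segmentation.
        return self.net(seg_logits.softmax(dim=1))

# Because the two modules communicate only through the segmentation map, a
# control policy trained in simulation can, in principle, be reused with a
# perception module trained or fine-tuned on real images.
perception, control = PerceptionModule(), ControlModule()
action_logits = control(perception(torch.randn(1, 3, 64, 64)))  # shape (1, 3)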

Cited by 66 publications (35 citation statements) · References 11 publications
“…Here, the navigation direction is controlled by a fuzzy logic controller that receives a segmented image from the RGB input. Another work has shown that a semantic segmentation model trained in a virtual environment can minimize the gap between the real and virtual environment [36]. In this study, the segmentation model plays an essential role in visually guiding a robot where to go, and an RL agent trained in the simulation can be transferred to the real environment to control a car.…”
Section: Robot Navigation Using Segmentation Map
confidence: 95%
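As this excerpt describes, the policy trained in simulation consumes segmentation maps rather than raw pixels, so at deployment a real camera frame is first segmented and then passed to the same policy. A minimal, assumed deployment step might look like the following; the perception and policy arguments and the tensor shapes are placeholders, not an API from the cited work.

import torch

@torch.no_grad()
def control_step(perception, policy, rgb_frame: torch.Tensor) -> int:
    """Segment one camera frame, then let the simulation-trained policy choose an action."""
    seg_logits = perception(rgb_frame.unsqueeze(0))  # (1, num_classes, H, W)
    action_logits = policy(seg_logits)               # (1, num_actions)
    return int(action_logits.argmax(dim=1).item())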
“…Synthetic data has been harnessed as a training source to power data-hungry deep network policies. Accordingly, several works have introduced new techniques to close the reality gap, allowing these policies to generalize from simulation to the real world [14,27,40]. In this work, we also use simulated data to train our models.…”
Section: Related Work
confidence: 99%
“…Prior works that addressed generalization of navigation policies [14,27] have typically focused on simulation-to-real transfer for low-level motion control tasks. We instead evaluate our approach on visual navigation tasks of higher complexity in the Gibson simulated environments [37], which have been shown to transfer to the real world without further supervision [24,17].…”
Section: Introduction
confidence: 99%
“…It has also been shown that domain randomisation is less efficient than a combination involving domain-adaptation methods [22]. DA methods, such as splitting the model into a perceptual module and a control module and then retraining the perceptual module for new environments, have been proposed in [23], [24] to improve transfer learning; however, the drawbacks include expensive retraining and the fact that the representation connecting the two modules limits the information available to the control module.…”
Section: Introduction
confidence: 99%
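One way to read the domain-adaptation strategy described in the last excerpt is that only the perceptual module is retrained for a new environment while the control module is kept frozen. The sketch below is a hypothetical illustration of that retraining loop under that reading; the data loader, loss, and optimiser settings are assumptions, not details from [23] or [24].

import torch
import torch.nn as nn

def adapt_perception(perception: nn.Module, control: nn.Module,
                     real_loader, epochs: int = 5, lr: float = 1e-4) -> None:
    """Fine-tune only the perception module; the control module stays frozen."""
    for p in control.parameters():            # reuse the control policy as-is
        p.requires_grad = False
    optimiser = torch.optim.Adam(perception.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()         # per-pixel segmentation loss
    perception.train()
    for _ in range(epochs):
        for rgb, seg_labels in real_loader:   # (B,3,H,W) images, (B,H,W) class ids
            loss = criterion(perception(rgb), seg_labels)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()

The cost of this loop on labelled real images is exactly the "expensive retraining" drawback the excerpt points out, and the segmentation map passed between the two modules is the representation it says limits the information available to the control module.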