2020
DOI: 10.1016/j.simpa.2020.100022
dm_control: Software and tasks for continuous control

Abstract: The dm_control software package is a collection of Python libraries and task suites for reinforcement learning agents in an articulated-body simulation. Infrastructure includes a wrapper for the MuJoCo physics engine and libraries for procedural model manipulation and task authoring. Task suites include the Control Suite, a set of standardized tasks intended to serve as performance benchmarks, a locomotion framework and task families, and a set of manipulation tasks with a robot arm and snap-together bricks. A…

Cited by 104 publications (52 citation statements).
References 11 publications.
“…The simulated environments include Meta-World (Yu et al, 2020) introduced to benchmark meta-reinforcement learning and multi-task learning, Sokoban (Racanière et al, 2017) proposed as a planning problem, BabyAI (Chevalier-Boisvert et al, 2018) for language instruction following in grid-worlds, the DM Control Suite (Tunyasuvunakool et al, 2020) for continuous control, as well as DM Lab (Beattie et al, 2016) designed to teach agents navigation and 3D vision from raw pixels with an egocentric viewpoint. We also use the Arcade Learning Environment (Bellemare et al, 2013) with classic Atari games (we use two sets of games that we call ALE Atari and ALE Atari Extended, see Section F.1 for details).…”
Section: Simulated Control Tasks
confidence: 99%
“…The DeepMind Control Suite (Tunyasuvunakool et al, 2020) is a set of physics-based simulation environments. For each task in the control suite we collect two disjoint sets of data, one using only state features and another using only pixels.…”
Section: F Data Collection Details
confidence: 99%
“…In contrast, RL methods are routinely used for a variety of other problems, such as robotics (Tunyasuvunakool et al, 2020), autonomous driving (Sallab et al, 2017), and smart building energy management (Yu et al, 2021). The main attraction of BSD is the principled nature of the solution.…”
Section: Discussion
confidence: 99%
“…Environments can be based on simulations. For example, popular RL applications with simulation-based environments include Atari video-games (Mnih et al, 2013), robotic tasks (Tunyasuvunakool et al, 2020) and autonomous driving (Sallab et al, 2017; Wurman et al, 2022).…”
Section: Introduction
confidence: 99%
“…Thanks to the recent progress of physics simulation (Todorov et al 2012; Coumans and Bai 2016; Erez et al 2015), it has drawn increasing interest to build full-physics robot simulation environments (Urakami et al 2019; Tunyasuvunakool et al 2020; Zhu et al 2020; James et al 2020; Mu et al 2021). Compared to robot simulation with abstract actions (Kolve et al 2017; Savva et al 2019), full-physics robot simulation supports low-level policy learning that could be transferred to the real world.…”
Section: Robot Simulation
confidence: 99%