2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020
DOI: 10.1109/iros45743.2020.9341605
Distributed Reinforcement Learning of Targeted Grasping with Active Vision for Mobile Manipulators

Cited by 10 publications (7 citation statements) · References 21 publications
“…Furthermore, the authors in works [90, 91, 92] assumed prior knowledge and relied on a target object with a specified color to be retrieved. In contrast, Fujita et al. [93] accepted the target object as an image instead of using a segmentation module [91, 92]. A deep RL system based on active vision has been used to retrieve target objects in dense clutter.…”
Section: Critical Review
confidence: 99%
“…In the preceding deterministic MDP formulation, we aim at solving a goal-reaching RL problem (Kaelbling, 1993b; Sutton et al., 2011; Andrychowicz et al., 2017; Andreas et al., 2017; Pong et al., 2018; Ghosh et al., 2019; Eysenbach et al., 2020a, 2020b; Kadian et al., 2020; Fujita et al., 2020; Chebotar et al., 2021; Khazatsky et al., 2021) or a planning problem (Bertsekas & Tsitsiklis, 1996; Boutilier et al., 1999; Sutton et al., 1999; Boutilier et al., 2000; Rintanen & Hoffmann, 2001; LaValle, 2006; Russell & Norvig, 2009; Nasiriany et al., 2019). We say a Q-function is successful if its associated greedy policy (Sutton & Barto, 2018)…”
Section: Successful Q-functions
confidence: 99%
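The excerpt above defines a "successful" Q-function in terms of its associated greedy policy. As an illustration only (the toy Q-table and its values are hypothetical, not taken from the cited work), a greedy policy simply selects the argmax action of the Q-function in each state:

```python
import numpy as np

# Toy Q-table for a 4-state, 2-action deterministic MDP
# (hypothetical values, purely for illustration).
Q = np.array([
    [0.1, 0.9],
    [0.8, 0.2],
    [0.5, 0.7],
    [0.0, 0.0],  # terminal (goal) state
])

def greedy_policy(q_table, state):
    """Return the action that maximizes the Q-value in `state`."""
    return int(np.argmax(q_table[state]))

actions = [greedy_policy(Q, s) for s in range(Q.shape[0])]
print(actions)  # -> [1, 0, 1, 0]
```

In the goal-reaching formulation, such a Q-function counts as successful when following this greedy policy actually leads the agent to the goal state.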
“…Goal-Conditioned RL Goal-conditioned RL, the problem of learning a policy that reaches certain goal states, has been studied empirically in many prior works (Kaelbling, 1993b; Sutton et al., 2011; Andrychowicz et al., 2017; Fu et al., 2018; Pong et al., 2018; Ghosh et al., 2019; Eysenbach et al., 2020a, 2020b; Kadian et al., 2020; Fujita et al., 2020; Chebotar et al., 2021; Khazatsky et al., 2021). Goal-conditioned RL is closely related to the sparse-reward setting in our framework, where the agent receives a terminal reward only at the terminal (goal) states.…”
Section: Reward Design
confidence: 99%
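The excerpt above describes goal-conditioned RL and its sparse terminal reward. A minimal sketch of that setting follows (the distance tolerance, the toy policy, and all values are hypothetical, not taken from any cited paper): the policy is conditioned on the goal, and the reward is nonzero only when the goal state is reached.

```python
import numpy as np

def sparse_goal_reward(state, goal, tol=1e-3):
    """Terminal reward of 1.0 only when the state matches the goal,
    0.0 everywhere else (the sparse-reward setting described above)."""
    return 1.0 if np.linalg.norm(np.asarray(state) - np.asarray(goal)) < tol else 0.0

def goal_conditioned_policy(state, goal):
    """Hypothetical policy: takes (state, goal) rather than state alone,
    and steps one unit toward the goal along each axis."""
    return np.sign(np.asarray(goal) - np.asarray(state))

s, g = np.array([0.0, 0.0]), np.array([2.0, -1.0])
for _ in range(5):
    s = s + goal_conditioned_policy(s, g)
print(sparse_goal_reward(s, g))  # -> 1.0
```

The key design point is that both the policy and the reward are parameterized by the goal, so one learned policy can be reused across many goals rather than retraining per task.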
“…In recent years, the application of deep reinforcement learning [3, 4, 5, 6, 7, 8] in the robot field [9, 10] has deepened, and it has been widely used in grasping [11, 12], assembly [13], path planning [14, 15], and other fields [16, 17]. A few scholars have used deep reinforcement learning to study the constant force-tracking process, showing its great potential for solving the constant force-tracking problem.…”
Section: Introduction
confidence: 99%