2022
DOI: 10.1016/j.birob.2022.100047
Hierarchical policy with deep-reinforcement learning for nonprehensile multiobject rearrangement

Cited by 6 publications (4 citation statements)
References 8 publications
“…However, multidisciplinary methods have been demonstrated to enhance knowledge gains while fostering the growth of interest, problem‐solving skills, and creative thinking. In order to merge engineering, science, and computational thinking (CT), the Designing Biomimetic Robots action plan places a strong emphasis on problem‐based learning (Bai et al., 2022). Students study the surrounding ecosystems for 3–4 weeks to discover how animals perform various tasks, then enthusiastically design a robot based on what they learned.…”
Section: Biomimicry‐based Strategies
confidence: 99%
“…It will also be necessary to enhance the manipulation tool to increase the versatility and adaptability of the rearrangement process. Exhibit 10 compares the effectiveness of several success rate evaluation methods (Bai et al, 2022).…”
Section: Biomimicry‐based Strategies
confidence: 99%
“…Model Predictive Control (MPC) combined with RL helps robots adjust their actions to deal with unexpected changes in the environment, as in legged locomotion [176,177]. Exploration methods such as Monte Carlo Tree Search (MCTS) and Proximal Policy Optimization (PPO) help robots make good decisions even in complex situations [178–183]. Hierarchical RL makes decision-making more efficient by breaking it down into smaller steps.…”
Section: Elevating Decision-making Processes
confidence: 99%
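The hierarchical decomposition mentioned in the last statement, where decision-making is broken into smaller steps, can be sketched as a two-level control loop: a high-level policy selects which object to handle next (the subgoal), and a low-level policy emits primitive actions toward that subgoal. This is an illustrative sketch only, not the cited paper's method; the class names (`HighLevelPolicy`, `LowLevelPolicy`) and the 1-D push dynamics are hypothetical simplifications, and real systems would learn both levels with RL rather than use these hand-coded heuristics.

```python
class HighLevelPolicy:
    """Picks which object to move next (the subgoal)."""

    def select_subgoal(self, state, goals):
        # Hypothetical heuristic: choose the object farthest from its goal.
        return max(goals, key=lambda obj: abs(state[obj] - goals[obj]))


class LowLevelPolicy:
    """Emits a primitive push toward the chosen object's goal position."""

    def act(self, state, obj, goal_pos):
        # Nonprehensile step: push the object one unit toward its goal.
        return 1 if goal_pos > state[obj] else -1


def rearrange(state, goals, max_steps=50):
    """Hierarchical loop: pick a subgoal, then act on it with primitives."""
    high, low = HighLevelPolicy(), LowLevelPolicy()
    for _ in range(max_steps):
        if all(state[o] == g for o, g in goals.items()):
            break  # every object is at its goal position
        obj = high.select_subgoal(state, goals)
        state[obj] += low.act(state, obj, goals[obj])
    return state


print(rearrange({"cube": 0, "ball": 5}, {"cube": 3, "ball": 2}))
# → {'cube': 3, 'ball': 2}
```

The point of the split is that the high-level policy reasons only over *which* object to move, while the low-level policy reasons only over *how* to move it, so each level faces a much smaller decision space than a single flat policy would.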