2021
DOI: 10.48550/arxiv.2104.02402
Preprint

General Robot Dynamics Learning and Gen2Real

Dengpeng Xing,
Jiale Li,
Yiming Yang
et al.

Abstract: Acquiring dynamics is an essential topic in robot learning, but up-to-date methods, such as dynamics randomization, need to restart to check nominal parameters, generate simulation data, and train networks whenever they face different robots. To improve it, we novelly investigate general robot dynamics, its inverse models, and Gen2Real, which means transferring to reality. Our motivations are to build a model that learns the intrinsic dynamics of various robots and lower the threshold of dynamics learning by e…









Cited by 2 publications (3 citation statements)
References 22 publications (24 reference statements)
“…In Ref. [100], the authors proposed a pre-training decoder model for forward and inverse models of robots. The structure of the robot has various factors such as the length and number of links and torque values.…”
Section: Miscellaneous Models (mentioning)
confidence: 99%
“…Decision transformer [96] Trajectory transformer [99] Miscellaneous models w/ pre-train lamBERT [89] Deeper DRL [91] CoBERL [93] Gen2Real [100] Pre-trained VLN Fig. 6.…”
Section: Sensor Fusion (mentioning)
confidence: 99%
“…The proposed model fused multimodal information by simply concatenating the outputs from the pre-trained transformer models; therefore, the number of modalities it can handle depends on the number of existing pre-trained models. For example, pre-trained transformer models exist for human poses [39], biological signals [40], videos [21], [41], [42], and robot dynamics [43]. Consequently, on combining these pre-trained models and using the transformer layers on top, as in the proposed model, the model can be easily fine-tuned to a multimodal task.…”
Section: E. Pre-trained Models For Other Modalities (mentioning)
confidence: 99%
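The fusion scheme described in the last citation statement (concatenating the outputs of frozen per-modality pre-trained encoders, then fine-tuning a small trainable layer on top) can be sketched as follows. This is a minimal, hypothetical illustration: the random projections stand in for real pre-trained encoders, and all shapes and names are assumptions, not the cited model's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(x, out_dim, seed):
    """Stand-in for a frozen pre-trained encoder: a fixed random projection."""
    w = np.random.default_rng(seed).normal(size=(x.shape[-1], out_dim))
    return x @ w

# Three hypothetical modalities with different feature sizes (batch of 4).
pose = rng.normal(size=(4, 17 * 3))     # e.g. 17 joints x 3D coordinates
video = rng.normal(size=(4, 512))       # e.g. video clip embedding
dynamics = rng.normal(size=(4, 64))     # e.g. robot dynamics features

# Each modality passes through its own frozen pre-trained encoder.
features = [
    frozen_encoder(pose, 128, seed=1),
    frozen_encoder(video, 128, seed=2),
    frozen_encoder(dynamics, 128, seed=3),
]

# Fusion by simple concatenation along the feature axis, as the quote describes.
fused = np.concatenate(features, axis=-1)   # shape (4, 384)

# A trainable head (here a single linear layer) is fine-tuned on fused features.
head = rng.normal(size=(fused.shape[-1], 10)) * 0.01
logits = fused @ head                        # shape (4, 10)
print(fused.shape, logits.shape)
```

Because fusion is plain concatenation, the number of modalities this handles is bounded only by how many pre-trained encoders exist, which is the point the citing authors make.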