2021
DOI: 10.1155/2021/5550850

End‐Effector Pose Estimation in Complex Environments Using Complementary Enhancement and Adaptive Fusion of Multisensor

Abstract: Redundant manipulators are suitable for working in narrow and complex environments due to their flexibility. However, a large number of joints and long slender links make it hard to obtain the accurate end-effector pose of the redundant manipulator directly through the encoders. In this paper, a pose estimation method is proposed with the fusion of vision sensors, inertial sensors, and encoders. Firstly, according to the complementary characteristics of each measurement unit in the sensors, the original data i…

Cited by 5 publications (1 citation statement)
References: 26 publications
“…A Root Mean Square Error (RMSE) between 0.4–0.47 deg and between 1.17–1.22 deg was reported for the yaw and pitch angles, respectively, for different elements of the robot. In [41], Luo et al. proposed a fusion pose estimation method (RBF-UKF) for a redundant robot based on a multisensor fusion approach applied in two phases: a “pre-enhancement” fusion phase, in which information from an RGB-D camera and a MARG (Magnetic, Angular Rate, and Gravity) sensor is fused with the information from an optical encoder, and an adaptive fusion phase, in which the pose of the robot is predicted and various parameters are adaptively adjusted. Their experimental setup consists of eight serially connected 1-DoF modules forming a redundant manipulator.…”
Section: Introduction
Mentioning confidence: 99%
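
The two-phase structure described in the citation statement lends itself to a compact sketch. The following is a minimal, illustrative Python skeleton, not the authors' RBF-UKF: a fixed-weight complementary blend stands in for the pre-enhancement fusion of encoder, MARG, and RGB-D data, and a scalar Kalman filter with innovation-based noise rescaling stands in for the adaptive UKF phase. The function and class names (complementary_fuse, AdaptiveKF1D), the blending weights, and the noise levels are assumptions introduced here for illustration only.

import numpy as np

def complementary_fuse(theta_encoder, theta_marg, theta_vision, w=(0.5, 0.3, 0.2)):
    # Pre-enhancement phase (illustrative): blend joint-angle estimates from the
    # encoder, the MARG sensor, and the RGB-D camera with fixed complementary
    # weights. The cited work derives its blending from the sensors'
    # complementary characteristics; the weights here are placeholders.
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return w[0] * theta_encoder + w[1] * theta_marg + w[2] * theta_vision

class AdaptiveKF1D:
    # Adaptive fusion phase (illustrative): a scalar Kalman filter whose
    # measurement-noise variance is rescaled from the innovation, standing in
    # for the RBF-based adaptation of the UKF described in the cited work.
    def __init__(self, x0=0.0, p0=1.0, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, z):
        # Predict with a constant-angle motion model for simplicity.
        x_pred = self.x
        p_pred = self.p + self.q
        # Innovation, then a crude adaptive rescaling of the measurement noise.
        nu = z - x_pred
        self.r = 0.95 * self.r + 0.05 * max(nu**2 - p_pred, 1e-6)
        # Standard Kalman update.
        k = p_pred / (p_pred + self.r)
        self.x = x_pred + k * nu
        self.p = (1.0 - k) * p_pred
        return self.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_yaw = np.deg2rad(30.0)          # one joint angle of one 1-DoF module
    kf = AdaptiveKF1D()
    for _ in range(100):
        enc = true_yaw + rng.normal(0.0, 0.002)   # encoder: low noise
        marg = true_yaw + rng.normal(0.0, 0.01)   # MARG: noisier, drift-prone
        cam = true_yaw + rng.normal(0.0, 0.02)    # RGB-D: noisiest, but absolute
        z = complementary_fuse(enc, marg, cam)    # phase 1: pre-enhancement
        est = kf.step(z)                          # phase 2: adaptive fusion
    print(f"estimated yaw: {np.rad2deg(est):.2f} deg")

In this toy setup the filter settles close to 30 deg; the point of the sketch is only the division of labour between a complementary pre-fusion step and an adaptive filtering step, which mirrors the two phases attributed to the cited method.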