2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967710
A Comparison of Visual Servoing from Features Velocity and Acceleration Interaction Models

Abstract: Visual Servoing has been widely investigated in recent decades, as it provides a powerful strategy for robot control. Thanks to direct feedback from a set of sensors, it makes it possible to reduce the impact of some modeling errors and to perform tasks even in uncertain environments. The approach commonly exploited in this field is to use a model that expresses the rate of change of a set of features as a function of the sensor twist. These schemes are commonly used to obtain a velocity command, which needs to be track…
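The velocity-level interaction model the abstract refers to is the classical relation ṡ = L(s, Z) v, which yields the standard control law v = -λ L⁺ (s - s*). A minimal sketch for normalized image-point features, assuming the textbook point-feature interaction matrix (the gain and feature values below are illustrative, not taken from the paper):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity_command(s, s_star, Z, lam=1.0):
    """Velocity-level IBVS law: v = -lam * pinv(L) @ (s - s_star).

    s, s_star: (2N,) stacked current/desired point features; Z: (N,) depths.
    Returns the 6-vector camera twist (vx, vy, vz, wx, wy, wz).
    """
    pts = s.reshape(-1, 2)
    L = np.vstack([interaction_matrix(x, y, z) for (x, y), z in zip(pts, Z)])
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```

With the feature error in the range of L, one Euler step of ṡ = L v under this command contracts the error by a factor of roughly (1 - λ·dt), which is the exponential decrease the classical scheme aims for.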

Cited by 8 publications (7 citation statements). References 17 publications.
“…Most works have presented their results on DVS in simulation, tackling just a few of the real-world issues such as measurement noise [6], [7]; or, instead of generating the torque command from the features' acceleration, a velocity command is obtained by numerical integration and sent to the robot, as in [19] for an MPC controller. In this work we aimed at pushing the state of the art forward by tackling several aspects of a real implementation not covered in previous works.…”
Section: A. Implementation Issues
confidence: 99%
“…In the past decades, most research efforts focused on modeling the relationships between descriptive features in the image and the motion of the sensor at the kinematic level [1], [2], and on developing control strategies to guide the chosen visual features towards the desired ones, as summarized in [3], [4]. Although much research has been conducted in this sense, little attention has been devoted to second-order models linking the feature accelerations to the robot torques (via the robot dynamical model), with some early attempts dating back more than two decades [1], [5] up to some more recent contributions [6], [7]. As is well known, explicitly taking the robot dynamics into account allows the design of controllers with superior performance, especially concerning the regulation of forces and interaction with the environment.…”
Section: Introduction
confidence: 99%
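The second-order models the excerpt refers to are obtained by differentiating the velocity-level relation once and composing it with the manipulator dynamics. A sketch of the standard derivation, using common visual-servoing notation (L_s for the interaction matrix, J(q) for the robot Jacobian) rather than necessarily this paper's exact symbols:

```latex
% Differentiating \dot{s} = L_s v once gives the feature accelerations:
\ddot{s} = L_s \dot{v} + \dot{L}_s v
% Expressing the twist through the robot Jacobian, v = J(q)\dot{q}:
\ddot{s} = L_s J \ddot{q} + \bigl(L_s \dot{J} + \dot{L}_s J\bigr)\dot{q}
% A desired \ddot{s} is then realized at torque level via the dynamics:
\tau = M(q)\,\ddot{q} + c(q,\dot{q})
```

Solving the middle relation for q̈ and substituting into the dynamics is what links feature accelerations to joint torques, which is the coupling the citing paper highlights.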
“…In this work, we instead leverage results on the derivation of the VS dynamics [17], [18] that do not depend on the particular choice of visual features: this allows us to derive a general framework that can effectively combine vision and force sensing directly in feature space. This differs from previous works on this topic, since the derivation of second-order models has often been formulated ad hoc from the definition of the considered features.…”
Section: Related Work
confidence: 99%
“…(·)† is the pseudo-inverse of the matrix in the argument. Even though we have considered the manipulator interacting with the environment in our derivation, if the force measurement is not available we arrive at an indirect control of the force through a position controller such as the one derived in [17], [18].…”
Section: A. Impedance Control in Feature Space
confidence: 99%
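In practice the (·)† pseudo-inverse in such control laws is often computed in a damped form, so the command stays bounded when the interaction matrix approaches a singularity. A minimal numerical sketch; the damping factor mu here is an illustrative assumption, not a value from the cited works:

```python
import numpy as np

def damped_pinv(L, mu=1e-3):
    """Damped least-squares pseudo-inverse: L^T (L L^T + mu^2 I)^{-1}.

    For mu -> 0 this tends to the Moore-Penrose pseudo-inverse; a nonzero mu
    trades tracking accuracy for bounded commands near rank loss.
    """
    m = L.shape[0]
    return L.T @ np.linalg.inv(L @ L.T + (mu ** 2) * np.eye(m))
```

A small mu leaves well-conditioned directions essentially untouched while attenuating the gain along near-singular ones, which is why this variant is preferred over the exact pseudo-inverse in real implementations.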
“…with a representing the relative acceleration of the sensor expressed in the camera frame and h_s being a function of s, z and v. In particular, this last component can be written as a collection of quadratic forms [16]:…”
Section: A. Visual Servoing Models
confidence: 99%
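The excerpt describes h_s as a stack of quadratic forms in the twist v, one per feature coordinate. A hedged sketch of that structure; the matrices H_i below are symmetric placeholders standing in for the feature- and depth-dependent Hessians of [16]:

```python
import numpy as np

def h_s(v, H_list):
    """Evaluate h_s as a stack of quadratic forms: h_i = v^T H_i v.

    v: 6-vector camera twist; H_list: per-feature symmetric 6x6 matrices
    (in the cited model these depend on the features s and depths z).
    """
    return np.array([v @ H @ v for H in H_list])
```

Each entry being quadratic in v is what makes the second-order model nonlinear in the twist even when the first-order model ṡ = L_s v is linear.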