This paper analyzes the dynamics and control of pinch motions generated by a pair of multi-degree-of-freedom robot fingers with soft, deformable tips pinching a rigid object. It is shown first that passivity analysis leads to an effective design of a feedback control signal that realizes dynamically stable pinching (grasping), even though extra Lagrange-multiplier terms arise from the holonomic constraints of tight area contacts between the soft fingertips and the surfaces of the rigid object and exert torques and forces on the dynamics. It is shown second that a principle of superposition applies to the design of additional feedback signals for controlling both the posture (rotational angle) and the position (some task coordinates of the mass center) of the object, provided that the number of degrees of freedom of each finger satisfies a condition of stationary resolution of the controlled position state variables. The details of the feedback signals are presented for a special setup consisting of two robot fingers with two degrees of freedom.
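As a point of reference (a minimal sketch, not the paper's exact equations), constrained dynamics of this kind are commonly written in the standard Lagrangian form with multiplier terms; the symbols below (inertia matrix M, Coriolis/centrifugal matrix C, gravity vector g, constraint Jacobian A, multipliers lambda) are assumptions used only to illustrate why passivity survives the contact constraints:

\[
M(q)\ddot q + C(q,\dot q)\dot q + g(q) = u + A^{\top}(q)\lambda, \qquad A(q)\dot q = 0 .
\]

Because the velocities satisfy \(A(q)\dot q = 0\), the constraint forces do no work, \(\dot q^{\top} A^{\top}(q)\lambda = 0\); with the usual skew-symmetry of \(\dot M - 2C\) and \(g = \partial P/\partial q\), the energy balance \(\tfrac{d}{dt}\big(\tfrac{1}{2}\dot q^{\top} M(q)\dot q + P(q)\big) = \dot q^{\top} u\) is unchanged, which is why a passivity-based design of the pinching signal remains applicable despite the multiplier terms.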
Upper limb and hand functionality is critical to many activities of daily living, and the amputation of an upper limb can lead to a significant loss of functionality. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between the robotic hand and its human user and, more importantly, from an improved capability to infer human intent from multimodal sensor data, giving the robotic hand perceptual awareness of the operational context. Such multimodal sensor data may include environment sensors such as vision, as well as human physiology and behavior sensors such as electromyography (EMG) and inertial measurement units (IMUs). A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to support prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands, where computer vision methods will need to assess hand-view visual evidence in order to estimate human intent. Specifically, paired images from a human eye-view and a hand-view of various objects placed at different orientations were captured at the initial state of grasping trials, followed by paired video, EMG, and IMU data from the arm of the human during a grasp, lift, put-down, and retract style trial structure. For each trial, based on eye-view images of the scene showing the hand and the object on a table, multiple humans were asked to sort, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images and predict the eye-view labels assigned by humans.
Keywords: Multimodal dataset • Human grasp intent classification • Prosthetic hand • Eye- and hand-view images • EMG • Convolutional neural network
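To make the final point concrete, the following is a minimal sketch (not the authors' released code) of a convolutional classifier that maps a hand-view image to one of the five human-ranked grasp types; the architecture, input resolution, and use of the top-preference label as the training target are illustrative assumptions.

# Hedged sketch: small CNN predicting a grasp-type label from a hand-view image.
# The five-class encoding, 224x224 input, and layer sizes are assumptions, not
# details taken from the paper.
import torch
import torch.nn as nn

class HandViewGraspNet(nn.Module):
    def __init__(self, num_grasp_types: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_grasp_types)

    def forward(self, x):                     # x: (batch, 3, H, W) hand-view images
        h = self.features(x).flatten(1)       # (batch, 128) pooled features
        return self.classifier(h)             # logits over grasp types

# One training step against the top-ranked eye-view label (cross-entropy):
model = HandViewGraspNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)          # placeholder batch of hand-view images
labels = torch.randint(0, 5, (8,))            # placeholder top-preference grasp labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

A richer target could instead regress the full preference ordering (e.g., as per-class rank scores), but the single top-choice label above is the simplest reading of the experiment described in the abstract.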