2016
DOI: 10.1109/lra.2015.2506119
Introducing Geometric Constraint Expressions Into Robot Constrained Motion Specification and Control

Abstract: The problem of robotic task definition and execution was pioneered by Mason [1], who defined setpoint constraints where the position, velocity, and/or forces are expressed in one particular task frame for a 6-DOF robot. Later extensions generalized this approach to constraints in i) multiple frames, ii) redundant robots, iii) other sensor spaces such as cameras, and iv) trajectory tracking. Our work extends task definition to i) expressions of constraints, with a focus on expressions between geome…
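To illustrate the idea in the abstract, the sketch below is a minimal, hypothetical Python example (not the authors' actual software): a constraint is written as a symbolic expression between geometric entities, namely the distance from the tool point of a planar two-link arm to a target line, and is enforced at the velocity level. The link lengths, gain, and target line are arbitrary assumptions, and the symbolic Jacobian merely stands in for the expression-graph differentiation discussed in the paper.

```python
# Illustrative sketch only: a constraint expressed as a symbolic expression
# between geometric entities, enforced by a simple velocity-resolved law.
import numpy as np
import sympy as sp

q1, q2 = sp.symbols("q1 q2")   # joint angles of a planar 2-link arm
l1, l2 = 1.0, 0.8              # link lengths (arbitrary assumption)

# Geometric entity: the tool point (px, py) as an expression of the joints.
px = l1 * sp.cos(q1) + l2 * sp.cos(q1 + q2)
py = l1 * sp.sin(q1) + l2 * sp.sin(q1 + q2)

# Constraint expression: signed distance from the tool point to the line y = 0.5
# (only py enters this particular constraint; px is shown for completeness).
e = py - 0.5

# Symbolic differentiation of the expression, analogous to differentiating
# an expression graph automatically.
J = sp.Matrix([e]).jacobian([q1, q2])

e_fun = sp.lambdify((q1, q2), e, "numpy")
J_fun = sp.lambdify((q1, q2), J, "numpy")

def constraint_step(q, k=1.0, dt=0.01):
    """One velocity-resolved step driving the constraint expression to zero."""
    ev = float(e_fun(*q))
    Jv = np.asarray(J_fun(*q), dtype=float)   # 1 x 2 constraint Jacobian
    qdot, *_ = np.linalg.lstsq(Jv, np.array([-k * ev]), rcond=None)
    return q + dt * qdot

q = np.array([0.3, 0.4])
for _ in range(500):
    q = constraint_step(q)
print("constraint residual:", float(e_fun(*q)))   # approaches zero
```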

Cited by 12 publications (3 citation statements)
References 22 publications (36 reference statements)
“…A user can declaratively specify the task a robot must accomplish using a variety of different methods such as geometric constraints [22], temporal logic [23], or expression graphs [24]. Such approaches complement our work here, as a user can then specify both task and environment formally, and automatically verify their robotic system end to end (as in e.g., [25], [26]).…”
Section: Related Literature (mentioning)
confidence: 97%
“…The consideration of geometry in manipulation skills has been investigated early on in robotics, including the pioneer work of Mason formalizing force control based on task geometry [12]. In most applications, three priority levels of safety, primary and auxiliary constraints constructed from task geometric primitives suffice to describe manipulation features [13].…”
Section: Related Work (mentioning)
confidence: 99%
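The "three priority levels" mentioned in the statement above are commonly realized with strict task priorities. The following is a generic, hedged sketch (not taken from the cited papers) of two priority levels resolved at the velocity level via nullspace projection; the Jacobians, task dimensions, and desired rates are made-up toy values.

```python
# Generic sketch of strict priority between two velocity-level constraint sets.
# The higher-priority task is satisfied exactly where possible; the lower one
# only uses the remaining redundancy.
import numpy as np

def prioritized_qdot(J1, e1_dot, J2, e2_dot):
    """Resolve qdot so that J1 qdot = e1_dot (priority 1) and, within the
    nullspace of J1, J2 qdot approximates e2_dot (priority 2)."""
    J1_pinv = np.linalg.pinv(J1)
    qdot1 = J1_pinv @ e1_dot                    # satisfy the primary constraint
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1     # nullspace projector of J1
    J2N = J2 @ N1
    qdot2 = np.linalg.pinv(J2N) @ (e2_dot - J2 @ qdot1)
    return qdot1 + N1 @ qdot2

# Toy 3-DOF example: a 1-D primary constraint and a 2-D auxiliary task.
J1 = np.array([[1.0, 0.5, 0.0]])
J2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
qdot = prioritized_qdot(J1, np.array([0.2]), J2, np.array([0.1, -0.3]))
print("J1 qdot =", J1 @ qdot)   # equals the primary desired rate, 0.2
```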
“…The modelling approach chosen to represent the skills greatly affects the variety of possible skills and their adaptability to different hardware and environments. Constraint-based skill models offer a powerful and flexible choice, allowing us to model geometric constraints on the configuration and operational spaces [4,5,6,7], allowable velocities [8], and also forces and torques [9,10,11,12,13]. Also, constraint-based approaches have proven to be amenable to semantic modelling using ontologies [14] and …”
Section: Introduction (mentioning)
confidence: 99%
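The last statement notes that constraint-based skill models can mix geometric constraints with allowable velocities and force/torque targets. Below is an assumed, simplified sketch of how such heterogeneous constraints can be stacked into one weighted velocity-level least-squares solve, with the force target handled admittance-style (a force error is converted into a desired velocity along the constrained direction); the function name, gains, and numbers are illustrative and not taken from the cited works.

```python
# Simplified, assumed sketch: mixing a geometric constraint row with a
# force-regulation row in one weighted velocity-level least-squares solve.
import numpy as np

def mixed_constraint_qdot(J_geom, e_geom, J_force, f_meas, f_des,
                          k_geom=1.0, k_force=0.01, w_geom=1.0, w_force=1.0):
    """Weighted least-squares resolution of heterogeneous constraint rows."""
    v_geom = np.atleast_1d(-k_geom * e_geom)             # drive geometric error to zero
    v_force = np.atleast_1d(k_force * (f_des - f_meas))  # admittance: force error -> velocity
    A = np.vstack([w_geom * J_geom, w_force * J_force])
    b = np.concatenate([w_geom * v_geom, w_force * v_force])
    qdot, *_ = np.linalg.lstsq(A, b, rcond=None)
    return qdot

# Toy numbers for a 3-DOF arm: one geometric row, one force-controlled direction.
J_geom = np.array([[1.0, 0.2, 0.0]])
J_force = np.array([[0.0, 0.0, 1.0]])
qdot = mixed_constraint_qdot(J_geom, np.array([0.05]),
                             J_force, f_meas=2.0, f_des=5.0)
print("qdot =", qdot)
```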