2011
DOI: 10.14198/jopha.2011.5.1.05

Robust behavior and perception using hierarchical state machines: a pallet manipulation experiment

Abstract: Interacting with simple objects in semi-controlled environments is a rich source of challenging situations for mobile robots, particularly when performing sequential tasks. In this paper we present the computational architecture and results obtained from a pallet manipulation experiment with a real robot. To achieve a good success rate in locating and picking the pallets, a set of behaviors is assembled in a hierarchical state machine. The behaviors are arranged in such a way that the global uncertaint…

Cited by 3 publications (2 citation statements)
References 15 publications
“…In the absence of the proper tools, robots tend to be programmed using hard-coded if-then-else constructs (which are considerably error-prone when the complexity increases) or, at most, fixed plans embedded in state machines (see [34,35]). Using state machines to embed plans makes code more structured, easier to understand and less-likely to contain programming errors in comparison with hard-coded logic.…”
Section: Software-related Issues
confidence: 99%
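
The contrast drawn in the citation above, between hard-coded if-then-else logic and fixed plans embedded in state machines, can be made concrete with a minimal sketch. The Python fragment below is an illustrative assumption, not the architecture of the cited paper or of [34,35]: the state names, percept keys, and transitions are invented for a generic pick-pallet sequence.

# Minimal sketch (illustrative only): a fixed pick-pallet plan embedded in a
# small state machine. State names and percept keys are hypothetical.
SEARCH, APPROACH, ALIGN, PICK, DONE, FAILED = range(6)

def step(state, percept):
    """Advance the plan one step given a percept dictionary."""
    if state == SEARCH:
        return APPROACH if percept.get("pallet_seen") else SEARCH
    if state == APPROACH:
        if not percept.get("pallet_seen"):
            return SEARCH          # lost the pallet: fall back to searching
        return ALIGN if percept.get("near_pallet") else APPROACH
    if state == ALIGN:
        return PICK if percept.get("aligned") else ALIGN
    if state == PICK:
        return DONE if percept.get("pallet_lifted") else FAILED
    return state                   # DONE and FAILED are terminal

state = SEARCH
for percept in [{"pallet_seen": True},
                {"pallet_seen": True, "near_pallet": True},
                {"pallet_seen": True, "near_pallet": True, "aligned": True},
                {"pallet_lifted": True}]:
    state = step(state, percept)
print("final state:", state)       # prints 4, i.e. DONE

Keeping the transition logic in one place is what makes such plans easier to inspect and modify than equivalent nested conditionals scattered through a control loop.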
“…In [19,20], the pose of single pallets was estimated based on geometry features extracted from distance data provided by LiDAR with geometry classifiers. In addition, the authors of [21][22][23][24][25][26] obtained pallet pose information based on pallet size and edge features extracted from a color image provided by a camera. To reduce the effect of the environment, pallets were identified with marks attached to the pallet feet in [27][28][29].…”
Section: Introduction
confidence: 99%
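
As a rough illustration of the LiDAR-based line of work summarised above (geometric features extracted from distance data), the sketch below converts a planar scan to Cartesian points, splits it at range discontinuities, and keeps segments whose length matches a nominal pallet-face width. The thresholds, the nominal width, and the detection rule are assumptions made purely for illustration and are not taken from the cited papers.

import math

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert LiDAR range readings (meters) to Cartesian (x, y) points."""
    return [(r * math.cos(angle_min + i * angle_inc),
             r * math.sin(angle_min + i * angle_inc))
            for i, r in enumerate(ranges) if math.isfinite(r)]

def segment(points, gap=0.10):
    """Split the ordered point list wherever consecutive points are more than `gap` apart."""
    if not points:
        return []
    segments, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > gap:
            segments.append(current)
            current = []
        current.append(q)
    segments.append(current)
    return segments

def looks_like_pallet_face(seg, face_width=1.2, tol=0.15):
    """Simple geometric feature: end-to-end segment length close to the nominal pallet width."""
    return len(seg) >= 2 and abs(math.dist(seg[0], seg[-1]) - face_width) < tol

# Usage on one scan (ranges in meters, angles in radians):
#   points = scan_to_points(ranges, angle_min=-1.57, angle_inc=0.01)
#   candidates = [s for s in segment(points) if looks_like_pallet_face(s)]

A practical detector would add further checks (pocket gaps, corner angles) and track candidates over several scans; the sketch only shows the basic feature-extraction step.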