2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros51168.2021.9636791
Ontology-Assisted Generalisation of Robot Action Execution Knowledge

Cited by 9 publications (4 citation statements)
References 24 publications
“…Why did the human want the juice and what is a fitting alternative?)
• Location Detection (Welke et al, 2013): Categorize the location based on the recognized objects (e.g., the robot detects milk and juice and concludes that the location is a fridge)
• Navigation (Shylaja et al, 2013; Li et al, 2022): Navigate to a specific location
• Object Delivery (Lam et al, 2012; Riazuelo et al, 2013; Mühlbacher and Steinbauer, 2014; Al-Moadhen et al, 2015; Zhang and Stone, 2015; Wang et al, 2019; Yang et al, 2019): Finding the requested object and delivering it to a specific location
• Object Localization (Varadarajan and Vincze, 2012b; Zhou et al, 2012; Kaiser et al, 2014; Riazuelo et al, 2015; Jebbara et al, 2018; Daruna et al, 2019; Zhang et al, 2019; Chernova et al, 2020): Finding a specific object in an (unknown) environment
• Object Recognition (Daoutis et al, 2012; Pratama et al, 2014; Kümpel et al, 2020; Chiatti et al, 2022): Recognize a specific object based on its properties
• Pick and Place (Al-Moadhen et al, 2013; Javia and Cimiano, 2016; Mitrevski et al, 2021): Pick an object up and place it at a different location
• Reminiscence Therapy (Wu et al, 2019): Asking questions about provided pictures to get the human to remember and socialize
• Table Setting (Salinas Pinacho et al, 2018; Haidu and Beetz, 2019): Set the table for a meal scenario (and maybe also clean up afterwards)
• Tidy Up (Aker et al, 2012; Skulkittiyut et al, 2013): Bring a specified part of the environment in order by removing unusual objects
• Tool Substitution (Zhu et al, 2015; Thosar et al, 2020; 2021; …”
Section: Use Cases and Their Application Domain
confidence: 99%
“…FOON [8] is a structured method for representing knowledge that models objects and their movements in manipulation tasks; it is constructed through the manual annotation of instructional videos. Rather than building a comprehensive symbolic representation system, several works apply knowledge to specific robotics tasks, such as vision [9], grasping [10,11], assembly [12], and path planning [13,14], to describe robot behaviour. Furthermore, knowledge graph embeddings enable inference and find application in the field of robotics [15].…”
Section: Knowledge Representation In Robotic Manipulation
confidence: 99%
“…To generalize the obtained models, they use an ontology to determine how closely related the objects are, but they do not consider object properties such as the material. A related approach [15] uses ontologies to define clusters of objects with similar action success probabilities. However, the authors also note that other information, such as object affordances or object materials, could be used for generalization.…”
Section: B. Learning Explainable Models of Cause-Effect Relations
confidence: 99%
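
The last statement summarises the core idea: use an ontology to judge how closely related objects are, and group together objects whose action success probabilities are similar. The following is a minimal sketch of that idea, not the cited implementation; the ontology, class names, success probabilities, and thresholds are all hypothetical.

```python
# Minimal sketch: cluster objects that are ontologically close AND have
# similar success probabilities for one action (e.g. grasping).
# All classes, probabilities, and thresholds below are made up for illustration.

# Toy ontology: child class -> parent class.
PARENT = {
    "mug": "container",
    "cup": "container",
    "bottle": "container",
    "book": "rigid_object",
    "container": "rigid_object",
    "rigid_object": "object",
}

def ancestors(cls):
    """Return the path from a class up to the ontology root (inclusive)."""
    path = [cls]
    while cls in PARENT:
        cls = PARENT[cls]
        path.append(cls)
    return path

def wu_palmer(a, b):
    """Wu-Palmer-style similarity: a deeper common ancestor means more related."""
    path_a, path_b = ancestors(a), ancestors(b)
    common = next(c for c in path_a if c in path_b)  # lowest common ancestor
    depth = lambda c: len(ancestors(c))              # nodes between c and the root
    return 2 * depth(common) / (depth(a) + depth(b))

# Hypothetical per-object success probabilities of the action.
success = {"mug": 0.91, "cup": 0.88, "bottle": 0.72, "book": 0.40}

def cluster(objects, sim_thresh=0.75, prob_thresh=0.1):
    """Greedily group objects that are both ontologically close and
    have similar action success probabilities."""
    clusters = []
    for obj in objects:
        for c in clusters:
            rep = c[0]  # compare against the cluster's first member
            if (wu_palmer(obj, rep) >= sim_thresh
                    and abs(success[obj] - success[rep]) <= prob_thresh):
                c.append(obj)
                break
        else:
            clusters.append([obj])
    return clusters

print(cluster(list(success)))  # [['mug', 'cup'], ['bottle'], ['book']]
```

In this toy example, 'mug' and 'cup' share a deep common ancestor and have similar success probabilities, so they fall into one cluster, while 'bottle' (dissimilar probability) and 'book' (distant in the ontology) stay separate.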