2018
DOI: 10.1007/978-3-030-00111-7_12
Acquiring Knowledge of Object Arrangements from Human Examples for Household Robots

Cited by 5 publications (8 citation statements)
References 12 publications
“…Why did the human want the juice and what is a fitting alternative?)
• Location Detection (Welke et al., 2013): Categorize the location based on the recognized objects (e.g., the robot detects milk and juice and concludes that the location is a fridge)
• Navigation (Shylaja et al., 2013; Li et al., 2022): Navigate to a specific location
• Object Delivery (Lam et al., 2012; Riazuelo et al., 2013; Mühlbacher and Steinbauer, 2014; Al-Moadhen et al., 2015; Zhang and Stone, 2015; Wang et al., 2019; Yang et al., 2019): Finding the requested object and delivering it to a specific location
• Object Localization (Varadarajan and Vincze, 2012b; Zhou et al., 2012; Kaiser et al., 2014; Riazuelo et al., 2015; Jebbara et al., 2018; Daruna et al., 2019; Zhang et al., 2019; Chernova et al., 2020): Finding a specific object in an (unknown) environment
• Object Recognition (Daoutis et al., 2012; Pratama et al., 2014; Kümpel et al., 2020; Chiatti et al., 2022): Recognize a specific object based on its properties
• Pick and Place (Al-Moadhen et al., 2013; Javia and Cimiano, 2016; Mitrevski et al., 2021): Pick an object up and place it at a different location
• Reminiscence Therapy (Wu et al., 2019): Asking questions about provided pictures to get the human to remember and socialize
• Table Setting (Salinas Pinacho et al., 2018; Haidu and Beetz, 2019): Set the table for a meal scenario (and maybe also clean up afterwards)
• Tidy Up (Aker et al., 2012; Skulkittiyut et al., 2013): Bring a specified part of the environment in order by removing unusual objects
• Tool Substitution (Zhu et al., 2015; Thosar et al., 2020; 2021; …”
Section: Use Cases and Their Application Domain
confidence: 99%
“…For example, in the robotic domain, particularly in bridging physical events to concepts thereof, it is imperative to recognise that interpretations take place: Any characterization of an objective occurrence unexceptionally depends on the observers' subjective narrative [49]. Such an interpretive view has been employed with great success in SOMA-flavoured NEEMs [10,11,12,13,14] and has also been argued to be propitious for classifying mental processes [50]. SOMA enforces this stance by building upon the foundational ontology DUL, which consistently distinguishes PhysicalEntities from SocialEntities that exist "for the sake of [.…”
Section: Ontological Grounding
confidence: 99%
“…For different learning tasks, selected parts can then later be queried via the free Open-EASE platform [10]. This has proved useful, e.g., for learning action parameterization [11,12], learning common-sense knowledge from humans in VR [13], and transferring experiences between robots and affordances to novel objects [14].…”
Section: Introduction
confidence: 99%
“…Property extraction and creation methods, between objects in a household environment, have been implemented in many robotic platforms [8,22,33]. Usually an object identification is done based on the shape and the dimensions perceived by the vision module, or in some cases [2,31] reasoning mechanisms such as grasping area segmentation, or a physics based module contribute to understand an object's label.…”
Section: Related Work
confidence: 99%
“…Ontologies have been used in many cognitive robotic systems which perform object identification [8,22,31], affordances detection (i.e. the functionality of an object) [2,16,25], and for robotic platforms that work as caretakers for people in a household environment [20,34].…”
Section: Introduction
confidence: 99%