2022
DOI: 10.48550/arxiv.2202.08432
Preprint

AKB-48: A Real-World Articulated Object Knowledge Base

Abstract: AKB-48 consists of 2,037 articulated object models of 48 categories scanned from the real world. The objects are annotated with ArtiKG and can support a full task spectrum from computer vision to robotic manipulation.

Cited by 2 publications (2 citation statements). References 23 publications.

Citation statements:
“…The knowledge base provides the semantic context for the robots' input and output in their tasks, including defining the meaning or function of the manipulated objects. The early robot knowledge bases primarily focused on static objects, such as RoboEarth [1,2], KnowRob [3,4], RoboBrain [5], and the recently developed articulated object knowledge base AKB-48 [6]. However, all the aforementioned knowledge bases for robots are large-scale static repositories primarily focused on describing stationary objects in robotic manipulation tasks.…”
Section: Knowledge Representation in Robotic Manipulation
confidence: 99%
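The statement above is about attaching semantic context (category, meaning, function) to manipulated objects. As a rough illustration of what one record in such an articulated-object knowledge base might contain, here is a minimal Python sketch; the class and field names are hypothetical and do not reflect the actual ArtiKG or AKB-48 schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Joint:
    """Motion parameters linking two parts (hypothetical schema)."""
    joint_type: str                      # e.g. "revolute" or "prismatic"
    axis: Tuple[float, float, float]     # direction of the joint axis
    origin: Tuple[float, float, float]   # a point the axis passes through
    limits: Tuple[float, float]          # allowed motion range
    parent: str                          # part the joint is anchored to
    child: str                           # part that moves

@dataclass
class ArticulatedObjectEntry:
    """One knowledge-base record for a real-world articulated object."""
    category: str                        # semantic class, e.g. "scissors"
    function: str                        # what the object is for, e.g. "cutting"
    parts: List[str] = field(default_factory=list)     # part mesh ids
    joints: List[Joint] = field(default_factory=list)  # kinematic structure

# Example record (all values invented for illustration).
entry = ArticulatedObjectEntry(
    category="laptop",
    function="computing",
    parts=["base", "lid"],
    joints=[Joint("revolute", (0, 1, 0), (0, 0, 0), (0.0, 2.4),
                  parent="base", child="lid")],
)
```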
“…More recently, there has been increasing interest in data-driven methods for studying articulated objects and estimating motion parameters [13,34]. To support these data-driven approaches, there has been concurrent development of datasets of annotated part articulations for synthetic [34,38,41] and reconstructed [15,18] 3D objects. … predicts segmentation together with motion parameters, for 2.5D inputs [41,43], 3D point clouds [34], or for sequences of RGBD scans [12].…”
Section: Related Work
confidence: 99%
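The quoted passage speaks of predicting "motion parameters" alongside part segmentation. As a hedged sketch of what those parameters usually are (a joint axis, a point on that axis, and a joint state), the NumPy snippet below articulates a revolute part of a point cloud via Rodrigues' rotation formula; the function name and example values are invented for illustration and are not taken from any cited work.

```python
import numpy as np

def articulate_revolute(points, origin, axis, angle):
    """Rotate an (N, 3) array of part points about a joint axis.

    `origin` is a point on the axis, `axis` its direction, and `angle`
    the joint state in radians -- the typical "motion parameters" a
    segmentation-plus-articulation model would predict.
    """
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    p = np.asarray(points, dtype=float) - origin
    c, s = np.cos(angle), np.sin(angle)
    # Rodrigues' rotation formula, vectorized over all points.
    rotated = p * c + np.cross(axis, p) * s + np.outer(p @ axis, axis) * (1 - c)
    return rotated + origin

# Example: open a laptop lid part by 30 degrees about its hinge.
lid = np.random.rand(100, 3)
opened = articulate_revolute(lid, origin=[0.0, 0.0, 0.0],
                             axis=[0.0, 1.0, 0.0], angle=np.deg2rad(30))
```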