2020 Innovations in Intelligent Systems and Applications Conference (ASYU)
DOI: 10.1109/asyu50717.2020.9259887
Semantic Segmentation for Object Detection and Grasping with Humanoid Robots

Cited by 3 publications (3 citation statements) · References 16 publications
“…Another method for detecting objects was introduced in [7], [8]; Maiettini et al. [7] used a convolutional neural network (CNN) to train a network in an end-to-end manner on larger datasets with 2D bounding boxes, while in [8], a CNN was used to predict the class of an object from the proposed region. Aslan et al. [9] introduced semantic segmentation algorithms into a simulation and compared their accuracy, segmentation performance, and number of parameters. A year later, they combined a semantic segmentation algorithm with deep reinforcement learning (DRL) to recognize an object moving toward the robot [10].…”
Section: Introduction (confidence: 99%)
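The semantic segmentation approach described above assigns a class label to every pixel; the detection step then reduces to taking, at each pixel, the class with the highest score from the network's output maps. A minimal sketch of that per-pixel decision, using NumPy in place of an actual CNN (the scores here are hand-made toy values, not real network output):

```python
import numpy as np

def segment(score_maps):
    """Per-pixel semantic segmentation: pick the highest-scoring class.

    score_maps: array of shape (num_classes, H, W) holding per-class
    scores (e.g. the final layer of a segmentation CNN).
    Returns an (H, W) integer label map.
    """
    return np.argmax(score_maps, axis=0)

def object_mask(label_map, class_id):
    """Binary mask of the pixels assigned to one object class."""
    return label_map == class_id

# Toy example: 2 classes (background, object) over a 2x2 image.
scores = np.array([
    [[0.9, 0.2],
     [0.8, 0.1]],   # class 0: background scores
    [[0.1, 0.8],
     [0.2, 0.9]],   # class 1: object scores
])
labels = segment(scores)       # -> [[0, 1], [0, 1]]
mask = object_mask(labels, 1)  # right-hand column is the object
```

In a real pipeline the resulting mask localizes the object in the image, which can then be combined with depth data to compute a grasp target.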
“…Some of these methods use visual features in 2D images to localize graspable regions [2,3], while others use range or depth data for this purpose [4,5,6,7,8], the latter becoming more popular owing to the availability of low-cost RGBD sensors. Recently, deep learning-based methods have become increasingly popular for detecting graspable regions [9,10,11,12,13]. Most existing methods for vision-based grasping fall into two broad categories: those that rely on accurate geometric information about the object (or a CAD model) [14,15,16], which makes them impractical in several real-world use cases, and those that compute grasping affordances directly from an RGBD point cloud by harnessing local geometric features, without knowing the object's identity or its accurate 3D geometry [6,17,18,19].…”
Section: Introduction (confidence: 99%)
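A common local geometric feature used by the point-cloud-based grasping methods mentioned above is the surface normal of a small neighborhood of points, typically estimated via PCA. A minimal sketch of that estimation step, assuming a tiny hand-made patch rather than a real sensor cloud (this illustrates the general technique, not any specific cited method):

```python
import numpy as np

def surface_normal(points):
    """Estimate the local surface normal of a 3D point patch via PCA.

    points: (N, 3) array of neighboring points from an RGBD point cloud.
    The normal is the eigenvector of the patch's covariance matrix with
    the smallest eigenvalue, i.e. the direction of least variance.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # eigenvector of the smallest one

# Toy patch lying in the z = 0 plane: the normal should align with z.
patch = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0]])
n = surface_normal(patch)  # (anti)parallel to the z axis
```

Grasp candidate generators of this kind then orient the gripper approach direction against such normals, without needing the object's identity or CAD model.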
“…There are several advantages to this approach. For instance, it does not require a computationally intensive training phase to compute GPD, unlike other deep learning methods that attempt to process RGB or RGBD images directly [11,12,13,28,29] and, in some cases, use simulators such as Graspit! [14] to produce the examples required for training.…”
Section: Introduction (confidence: 99%)