ObjectFusion: An object detection and segmentation framework with RGB-D SLAM and convolutional neural networks
2019
DOI: 10.1016/j.neucom.2019.01.088

Cited by 36 publications (10 citation statements). References 10 publications.
“…The specific detection process is as follows. Firstly, the selective search algorithm is utilized to select the candidate target regions in the target image, and nearly 2000 candidate target regions are extracted [10,11]. Secondly, the selected 2000 candidate target regions are cut out from the original images, and the cut images are trimmed according to the unified size of 227 × 227.…”
Section: Research Methods Based On Region Proposal
Citation type: mentioning (confidence: 99%)
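The quoted pipeline (select roughly 2000 candidate regions, crop each from the image, warp it to a uniform 227 × 227 input) can be sketched as follows. This is a minimal illustration under stated assumptions, not the cited paper's implementation: `crop`, `warp`, and `preprocess` are hypothetical helper names, and a nearest-neighbour resize stands in for the warping step.

```python
# Sketch of the R-CNN-style preprocessing step described in the quote:
# crop each candidate region from the image, then warp it to a fixed
# 227x227 network input. Helper names are illustrative, not from the
# cited paper; images are plain 2-D lists so the sketch is self-contained.

def crop(image, box):
    """Crop a region given as (x0, y0, x1, y1) from a 2-D list image."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

def warp(region, size=227):
    """Nearest-neighbour resize of a 2-D region to size x size."""
    h, w = len(region), len(region[0])
    return [
        [region[int(r * h / size)][int(c * w / size)] for c in range(size)]
        for r in range(size)
    ]

def preprocess(image, proposals, size=227):
    """Crop and warp every candidate region to the network input size."""
    return [warp(crop(image, box), size) for box in proposals]
```

In the full method the proposals would come from selective search; here they can be any list of boxes, which keeps the cropping and warping logic visible on its own.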
“…MS-OPN can be solved by gradient descent method. The optimization objective function is shown in Eq (10).…”
Section: PLOS ONE
Citation type: mentioning (confidence: 99%)
“…Accurate depth perception is a prerequisite for several computer vision and robotics applications, such as simultaneous localization and mapping [1]- [3], object recognition [4]- [6], and object semantic segmentation [7]- [9]. Recently, commercial RGB-D cameras (e.g., Kinect, Realsense, ASUS Xtion) have been widely adopted as single-view depth sensors owing to their affordable price and portability.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…With the increasing amount of available low-cost consumer-grade depth sensors such as Microsoft Kinect [18], Intel RealSense [19], and depth cameras becoming a standard feature in some flagship mobile phones, we are moving towards an era where RGB-Depth (RGB-D) sensors are as common as regular RGB cameras. Object detection and segmentation with RGB-D sensors has been widely used in recent years, such as Canonical Correlation Analysis (CCA)-based multi-view Convolutional Neural Networks (CNN) [20], using regular point clouds in addition to multi-views for point cloud recognition [21], fusing CNNs with simultaneous localization and mapping in order to perform object segmentation [22], employing multi-modal deep neural networks and Dempster Shafer evidence theory to achieve the task of object recognition [23], or adopting multifoveated point clouds [24].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)