2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN)
DOI: 10.1109/icufn.2017.7993842
Kinect depth sensor for computer vision applications in autonomous vehicles

Cited by 18 publications (5 citation statements)
References 12 publications
“…Instead, they rely on other cues to estimate depth, such as object size, perspective, and shadows. Moreover, some articles proposed adding RFID [10] as a complement to the surveillance camera for distance estimation, or additional sensors such as a LiDAR sensor [11] or Kinect sensor [12]. Given that our system employs a single camera for capturing maritime traffic and adding a secondary sensor is not possible, our research narrows its focus to monocular camera distance estimation methods.…”
Section: Related Work (mentioning)
confidence: 99%
“…Therefore, the ToF camera is more commonly used as a data acquisition device. Since depth cameras work well in low light and even in dark conditions, object recognition based on the obtained 3D images has been used in many scenes, such as human detection, industrial assembly, gesture recognition, and others [22,23,24,25,26,27,28]. Luna et al. [29] presented a new method for detecting people using only depth images, with the data captured by a depth camera in a frontal position.…”
Section: Introduction (mentioning)
confidence: 99%
“…Kinect, LIDAR, SONAR, optical flow and stereo camera sensors are widely used for depth estimation (see [12], [13]) and hence these can be potentially used for obstacle avoidance as well, without resorting to computation-intensive approaches like SLAM and SfM. However, these sophisticated sensors are expensive and add unnecessary burden to the UAV in terms of weight as well as consumption of power.…”
Section: Introduction (mentioning)
confidence: 99%
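
The statement above notes that depth sensors such as the Kinect can support obstacle avoidance directly, without computation-heavy SLAM or SfM pipelines. As a minimal, hypothetical sketch of that idea (not taken from the cited papers), the following Python snippet thresholds the central region of a single depth frame to flag a nearby obstacle; the frame layout, the 1 m safety threshold, and the pixel count are illustrative assumptions.

```python
# Hedged sketch: naive obstacle check from a single depth frame (e.g. a Kinect frame),
# with no SLAM/SfM. The ROI, 1.0 m threshold and min_pixels are assumptions chosen
# for illustration, not values from the cited papers.
import numpy as np

def obstacle_ahead(depth_m: np.ndarray, threshold_m: float = 1.0,
                   min_pixels: int = 500) -> bool:
    """Return True if enough pixels in the central region are closer than threshold_m.

    depth_m: H x W array of depth values in metres (0 = invalid reading).
    """
    h, w = depth_m.shape
    # Look only at the central third of the image, roughly the vehicle's corridor.
    roi = depth_m[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    valid = roi > 0                       # discard invalid (zero) depth readings
    close = valid & (roi < threshold_m)   # pixels nearer than the safety threshold
    return int(close.sum()) >= min_pixels

if __name__ == "__main__":
    # Synthetic 640x480 frame: background at 3 m, a 100x100 patch at 0.5 m in the centre.
    frame = np.full((480, 640), 3.0, dtype=np.float32)
    frame[190:290, 270:370] = 0.5
    print(obstacle_ahead(frame))  # True: the close patch exceeds min_pixels
```

The single-threshold design trades accuracy for cost: it needs only one depth frame and a few array operations, which is the kind of lightweight alternative to SLAM/SfM the quoted passage alludes to.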