2023
DOI: 10.1016/j.compag.2023.108035

RGB-D datasets for robotic perception in site-specific agricultural operations—A survey

Cited by 7 publications (4 citation statements)
References 69 publications
“…In an extensive literature review on the use of depth data for agricultural applications, Kurtser and Lowry [ 23 ] identified only one paper in which a combination of color and depth data was used for object detection [ 24 ]. They dealt with the use case of apple detection.…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
“…In a literature review, Kurtser and Lowry [ 23 ] identified 24 papers that applied a combination of both color and depth data (RGB-D data) for robotic perception in precision agriculture. Almost all these papers focused on object detection solely using RGB images.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
“…The significance of point-cloud processing has surged across various domains, such as robotics [1,2], the medical field [3,4], autonomous driving [5,6], metrology [7–9], etc. Over the past few years, advancements in vision sensors have led to remarkable improvements, enabling these sensors to provide real-time 3D measurements of the surroundings while maintaining decent accuracy [10,11]. Consequently, point-cloud processing forms an essential pivot of numerous applications by facilitating robust object detection, segmentation, and classification operations.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%