2017
DOI: 10.1002/rob.21726
3D‐vision based detection, localization, and sizing of broccoli heads in the field

Abstract: This paper describes a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors, which was developed and evaluated using sensory data collected under real-world field conditions in both the UK and Spain. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning, and temporal filtering methods for detection of broccoli heads. Our experiments…

Cited by 74 publications (76 citation statements). References 22 publications.
“…Machine vision approaches offer significant opportunities for enabling autonomy of robotic systems in food production. Vision-based tasks for crop monitoring include phenotyping [29], classifying when individual plants are ready for harvest [30], and quality analysis [31], e.g. detecting the onset of diseases, all with high throughput data.…”
Section: Robotic Vision
confidence: 99%
“…Approaches based on analysis of 3D point clouds, e.g. derived from stereo imagery or RGB-D cameras, offer significant promise to achieve robust perception in challenging agricultural environments [30,35].…”
Section: Robotic Vision
confidence: 99%
“…Such vision systems typically include a plant classification component to identify individual crop and weed plants, which is then used as a basis for selecting the required treatment. A number of different vision pipelines, typically consisting of vegetation segmentation followed by classification and based on hand‐crafted features, have been proposed (Bosilj, Duckett, & Cielniak, ; Haug, Michaels, Biber, & Ostermann, ; Hemming & Rath, ; Kusumam, Krajník, Pearson, Duckett, & Cielniak, ; Lottes, Hörferlin, Sander, & Stachniss, ). Conversely, convolutional neural networks (CNNs) can automatically determine complex and highly discriminative features directly from images.…”
Section: Introduction
confidence: 99%
“…The mapping module integrates the individual transformed detections into a global 3D coordinate frame shared by the UAV team. The map-building algorithm was inspired by the system described in Kusumam, Krajník, Pearson, Duckett, and Cielniak (), which demonstrated reliable operation in field conditions. To associate the object detections coming from the perception system with the objects in the map, we assume that the maximal error of object position estimation is below a specific value, which we denote as e_max.…”
Section: Colored Target Localization and Motion Estimation
confidence: 99%
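The data-association step described in the last citation statement, matching each incoming detection to a mapped object only when their distance falls below the assumed maximal position-estimation error e_max, can be sketched as a greedy nearest-neighbour check. This is a minimal illustration of the general idea; the function and variable names are hypothetical and not taken from the cited system:

```python
import math

def associate_detections(detections, map_objects, e_max):
    """Associate 3D detections with objects already in the map.

    Each detection is matched to its nearest map object (Euclidean
    distance) only if that distance is at most e_max, the assumed
    maximal error of object position estimation. Detections with no
    map object within e_max are treated as newly observed objects.
    """
    matches, new_objects = [], []
    for det in detections:
        best_idx, best_dist = None, float("inf")
        for idx, obj in enumerate(map_objects):
            dist = math.dist(det, obj)  # Euclidean distance in 3D
            if dist < best_dist:
                best_idx, best_dist = idx, dist
        if best_idx is not None and best_dist <= e_max:
            matches.append((det, best_idx))
        else:
            new_objects.append(det)
    return matches, new_objects
```

A real system would additionally fuse each matched detection into the map object's position estimate (e.g. by averaging or filtering), but the thresholded nearest-neighbour test above captures the association rule the statement describes.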