2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341381

Real-time detection of broccoli crops in 3D point clouds for autonomous robotic harvesting

Abstract: Real-time 3D perception of the environment is crucial for the adoption and deployment of reliable autonomous harvesting robots in agriculture. Using data collected with RGB-D cameras under farm field conditions, we present two methods for processing 3D data that reliably detect mature broccoli heads. The proposed systems are efficient and enable real-time detection on depth data of broccoli crops using the organised structure of the point clouds delivered by a depth sensor. The systems are tested with datasets…
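The abstract's efficiency claim rests on exploiting the organised (pixel-grid) structure of the point clouds delivered by an RGB-D sensor. As a rough illustration of what "organised" buys you, the sketch below back-projects a depth image into an H x W x 3 cloud and estimates per-point normals from grid neighbours instead of a nearest-neighbour search. This is a minimal sketch assuming pinhole intrinsics, not the detection pipeline proposed in the paper.

```python
import numpy as np

def depth_to_organised_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metres) into an organised
    point cloud stored as an H x W x 3 array, preserving the pixel grid."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

def grid_normals(cloud):
    """Approximate per-point normals directly on the image grid:
    cross product of horizontal and vertical neighbour differences.
    This avoids a costly nearest-neighbour search, which is what makes
    organised clouds attractive for real-time processing.
    (Border rows/columns wrap around via np.roll; fine for a toy example.)"""
    dx = np.roll(cloud, -1, axis=1) - cloud   # right neighbour - current
    dy = np.roll(cloud, -1, axis=0) - cloud   # lower neighbour - current
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.clip(norm, 1e-9, None)

# Toy usage with a synthetic depth image and made-up intrinsics.
depth = np.full((480, 640), 1.2, dtype=np.float32)
cloud = depth_to_organised_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
normals = grid_normals(cloud)
print(cloud.shape, normals.shape)  # (480, 640, 3) (480, 640, 3)
```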

Cited by 11 publications (5 citation statements)
References 14 publications

“…Multiview approaches can be combined with deep learning techniques to segment and build a 3D point cloud of the plant. Such 3D point clouds offer higher precision when segmenting plant parts into stem and leaves, including individual instances of leaves, when compared to 2D segmentation, which can occasionally classify similar-looking background pixels as foreground (plant). Magistri et al. extended the advantages of a 3D point cloud to a temporal association, by using semantic segmentation via an SVM to extract correspondences between point clouds, allowing for phenotype tracking over plant growth …”
Section: Results (mentioning, confidence: 99%)
“…Such 3D point clouds offer higher precision when segmenting plant parts into stem and leaves, including individual instances of leaves, when compared to 2D segmentation, which can occasionally classify similar-looking background pixels as foreground (plant). 103,157,158 Magistri et al. extended the advantages of a 3D point cloud to a temporal association, by using semantic segmentation via an SVM to extract correspondences between point clouds, allowing for phenotype tracking over plant growth. 159 To summarize, more sophisticated models which tightly couple shoot dynamics with other components are needed for DCEA.…”
Section: 3.3 (mentioning, confidence: 99%)
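Both statements above cite Magistri et al.'s use of semantic segmentation via an SVM to extract correspondences between point clouds across plant growth. The snippet below is a hedged sketch of that general idea, not the cited implementation: a toy SVM labels points as stem or leaf from hypothetical hand-crafted features, and correspondences between two scans are then restricted to points sharing the same semantic class.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.spatial import cKDTree

def point_features(points):
    """Hypothetical per-point features: height above the lowest point and
    horizontal distance from the cloud centroid. The cited work's actual
    features are not reproduced here."""
    z = points[:, 2] - points[:, 2].min()
    r = np.linalg.norm(points[:, :2] - points[:, :2].mean(axis=0), axis=1)
    return np.column_stack((z, r))

# --- Train an SVM to label points as stem (0) or leaf (1) with toy data ----
rng = np.random.default_rng(0)
train_pts = rng.normal(size=(200, 3))
train_lbl = (train_pts[:, 2] > 0).astype(int)          # toy labels
svm = SVC(kernel="rbf").fit(point_features(train_pts), train_lbl)

# --- Two scans of the "same plant" at different times ----------------------
cloud_t0 = rng.normal(size=(300, 3))
cloud_t1 = cloud_t0 + rng.normal(scale=0.02, size=(300, 3))  # grown / shifted

lbl_t0 = svm.predict(point_features(cloud_t0))
lbl_t1 = svm.predict(point_features(cloud_t1))

# --- Class-constrained correspondences: match only within the same semantic
#     class, so stem points cannot be paired with leaf points. --------------
correspondences = []
for cls in np.unique(lbl_t0):
    idx0 = np.flatnonzero(lbl_t0 == cls)
    idx1 = np.flatnonzero(lbl_t1 == cls)
    if len(idx0) == 0 or len(idx1) == 0:
        continue
    tree = cKDTree(cloud_t1[idx1])
    _, nn = tree.query(cloud_t0[idx0])
    correspondences.extend(zip(idx0, idx1[nn]))

print(f"{len(correspondences)} class-consistent correspondences")
```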
“…Such studies also have targeted only a single plant species to automate conventional farming and targeted only harvesting operations. Various image recognition technologies for automatic harvesting have been proposed for the farm environment and crops [23-31]. However, most of these technologies are targeted at conventional farming methods in which a single species is grown, thus lowering the recognition rate in environments where a variety of plants exist in small areas.…”
Section: Relevant Research (mentioning, confidence: 99%)
“…LiDAR-based 3D object detection plays an indispensable role in 3D scene understanding, with a wide range of applications such as autonomous driving (Deng et al., 2021) and robotics (Ahmed et al., 2018; Montes et al., 2020). The emerging stream of 3D detection models enables accurate recognition at the cost of large-scale labeled point clouds, where 7-degree-of-freedom (DOF) 3D bounding boxes, consisting of position, size, and orientation information, are annotated for each object.…”
Section: Introduction (mentioning, confidence: 99%)
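The last statement refers to 7-degree-of-freedom box annotations (position, size, and yaw orientation). For reference, here is a small, self-contained sketch of that parameterisation and its corner computation; the class name and fields are illustrative and not tied to any specific detection framework or dataset format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box3D:
    """7-DOF 3D bounding box: centre (x, y, z), size (l, w, h), and a yaw
    rotation about the vertical axis."""
    x: float
    y: float
    z: float
    l: float
    w: float
    h: float
    yaw: float

    def corners(self):
        """Return the 8 box corners in world coordinates as an 8 x 3 array."""
        dx, dy, dz = self.l / 2, self.w / 2, self.h / 2
        local = np.array([[sx * dx, sy * dy, sz * dz]
                          for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # yaw about z
        return local @ rot.T + np.array([self.x, self.y, self.z])

# Example: a box roughly the size of a small vehicle, rotated 30 degrees.
box = Box3D(x=10.0, y=-2.0, z=0.8, l=4.2, w=1.8, h=1.6, yaw=np.deg2rad(30))
print(box.corners().round(2))
```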