2020
DOI: 10.1016/j.biosystemseng.2020.06.014

Online crop height and density estimation in grain fields using LiDAR

Abstract: LiDAR allows crop height and density to be estimated in small-grain crops. Crop height cannot be estimated properly at angles to the horizontal below 70°. Variance-based crop density estimation is robust against small viewing-angle variations. LiDAR data correlate better with ear dry mass than with the number of tillers.
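The highlights describe deriving crop height and a variance-based density proxy from LiDAR returns. As a rough illustration only (not the authors' published algorithm), the sketch below converts a single tilted 2-D scan line into echo heights and computes both quantities; the function name, percentile choice, and geometry are all assumptions.

```python
import numpy as np

def estimate_canopy_metrics(ranges_m, sensor_height_m, beam_angle_deg):
    """Illustrative sketch (not the paper's method): crop height and a
    variance-based density proxy from one tilted LiDAR scan line."""
    angle = np.deg2rad(beam_angle_deg)           # beam angle to the horizontal
    drop = np.asarray(ranges_m) * np.sin(angle)  # vertical drop of each echo below the sensor
    echo_height = sensor_height_m - drop         # echo height above ground level

    # Crop height: an upper percentile so sporadic ground echoes do not dominate.
    crop_height = np.percentile(echo_height, 95)

    # Density proxy: variance of echo heights. In a sparse canopy many beams
    # reach the ground, spreading the echoes; a dense canopy intercepts beams
    # near the top of the crop, reducing the spread.
    density_proxy = np.var(echo_height)
    return crop_height, density_proxy
```

In this toy geometry, one plausible reading of the abstract's 70° limit is that shallower beams travel a long way through the canopy before returning an echo, which biases the recovered heights.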

Cited by 24 publications (5 citation statements)
References 12 publications
“…characteriz. Robotic Operating System (ROS) + Velodyne data acquisition [71]-2020 wheat, barley SICK LMS111 Terrestrial/Mobile/Tractor Param. characteriz.…”
Section: Table A1 (mentioning)
confidence: 99%
“…This is consistent with the conclusion proposed by Swayes [17], who applied LiDAR sensors to estimate wheat density, that the crop properties are, to some extent, relevant to rice. The literature shows that the accuracy of wheat density estimation can reach an R² of 0.8, according to Blanquart [33], who also analyzed different installation angles of the LiDAR sensor. However, as wheat panicles have fewer leaves, the separation of wheat earheads is more obvious; moreover, estimation based on point-cloud data becomes more convenient, compared with mature rice plants with bent and intersected earheads.…”
Section: Rice-density Estimation (mentioning)
confidence: 99%
“…Those parameters are most effectively captured by techniques that enable the extraction of spatial depth information about an object. Notable examples of such techniques are Light Detection and Ranging (LiDAR), e.g., [9,10], Structure-from-Motion (SfM), which is the three-dimensional reconstruction of structures using a series of two-dimensional RGB images captured from multiple viewpoints, e.g., [11,12,13], the combination of RGB cameras with depth sensors for the simultaneous capturing of color and depth information, e.g., [14,15], or the combination of several of the above-mentioned methods and carrier platforms, e.g., [16,17]. An approach that utilizes two RGB cameras installed in a stereo-capable orientation is referred to as binocular vision or stereoscopic vision, which closely resembles the depth perception of the human eye.…”
Section: Introduction (mentioning)
confidence: 99%
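The excerpt above mentions binocular/stereoscopic vision; the depth it recovers follows the standard pinhole triangulation relation Z = f * B / d (depth from focal length, baseline, and pixel disparity). The minimal sketch below applies that textbook relation and is not taken from any of the cited works; the parameter names are illustrative.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Textbook stereo relation Z = f * B / d for rectified cameras."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)   # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = focal_length_px * baseline_m / d[valid]
    return depth                      # depth in metres, per pixel
```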