2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.244

Leaf Counting with Deep Convolutional and Deconvolutional Networks

Abstract: In this paper, we investigate the problem of counting rosette leaves from an RGB image, an important task in plant phenotyping. We propose a data-driven approach for this task generalized over different plant species and imaging setups. To accomplish this task, we use state-of-the-art deep learning architectures: a deconvolutional network for initial segmentation and a convolutional network for leaf counting. Evaluation is performed on the leaf counting challenge dataset at CVPPP-2017. Despite the small number …
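
As a rough illustration of the two-stage pipeline the abstract describes (a deconvolutional network for segmentation followed by a convolutional network for counting), here is a minimal PyTorch sketch. The module names, layer sizes, and the idea of feeding the mask alongside the RGB image into the counting net are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a segmentation-then-counting pipeline (assumed layout).
import torch
import torch.nn as nn

class SegNetLite(nn.Module):
    """Toy encoder-decoder ("deconvolutional") net producing a foreground mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

class CountNet(nn.Module):
    """Toy CNN regressing a scalar leaf count from image + mask channels."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

seg, cnt = SegNetLite(), CountNet()
img = torch.rand(1, 3, 128, 128)            # dummy RGB rosette image
mask = seg(img)                              # stage 1: foreground segmentation
count = cnt(torch.cat([img, mask], dim=1))   # stage 2: regress leaf count
print(count.shape)                           # torch.Size([1, 1])
```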

Citations: Cited by 142 publications (140 citation statements)
References: 29 publications
“… A paired t-test between our method and Aich and Stavness (the only two approaches from the CVPPP Workshop 2017) shows statistically significant differences (p-value < 0.0001). …”
Section: Results (mentioning)
confidence: 87%
“…Note that the single input model of our deep architecture achieved the best results on the CVPPP 2017 dataset in the LCC. A paired t-test shows statistically significant gains when compared with Aich and Stavness () (P-value < 0.0001; last column of Table). Figure collates results across all images as: (i) the correlation between ground truth and prediction, showing the high agreement of our method (R² = 0.96); (ii) the distribution of error in leaf count, where it can be seen that in about 80% of cases the error is confined within the ±1 leaf range (for comparison Giuffrida et al. …”
Section: Results (mentioning)
confidence: 98%
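
The statistics quoted above (paired t-test on per-image results, agreement measured by R²) can be reproduced in a few lines. The following is a hedged sketch with made-up per-image counts for illustration only; the arrays are not data from either paper.

```python
# Hypothetical per-image leaf counts, used only to show the calculation.
import numpy as np
from scipy import stats

ground_truth = np.array([8, 10, 12, 9, 11, 13, 7, 10])   # assumed true counts
method_a_err = np.array([1, 0, 2, 1, 0, 1, 2, 1])          # |error| of method A per image
method_b_err = np.array([0, 0, 1, 0, 1, 0, 1, 0])          # |error| of method B per image

# Paired t-test on the two methods' per-image errors.
t_stat, p_value = stats.ttest_rel(method_a_err, method_b_err)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# R^2 between ground-truth counts and one method's predictions.
pred_b = ground_truth + np.array([0, 0, -1, 0, 1, 0, -1, 0])
slope, intercept, r, p, se = stats.linregress(ground_truth, pred_b)
print(f"R^2 = {r**2:.2f}")
```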
“…Approaches to extracting quantitative or qualitative measurements of plant traits from images or other sensor data (such as point clouds produced by LIDAR sensors) can be broadly divided into two categories: 1) those where the programmer or scientist is responsible for telling the computer how to process the images [5] and 2) those where the computer learns how to process images from sets of annotated training data [6,7]. Both approaches require a combination of image/sensor data and ground truth measurements.…”
Section: Introduction (mentioning)
confidence: 99%