Graphical Abstract

Finely-grained annotated datasets for image-based plant phenotyping

Massimo Minervini, Andreas Fischbach, Hanno Scharr, Sotirios A. Tsaftaris

In this paper we present a collection of benchmark datasets for the development and evaluation of computer vision and machine learning algorithms in the context of plant phenotyping. We provide annotated imaging data and suggest suitable evaluation criteria for plant/leaf segmentation, detection, and tracking, as well as classification and regression problems. The figure symbolically depicts the available data together with ground-truth segmentations and further annotations and metadata.

Research Highlights

• First comprehensive annotated datasets for computer vision tasks in plant phenotyping.
• Publicly available data and evaluation criteria for eight challenging tasks.
• Tasks include fine-grained categorization of age, developmental stage, and cultivars.
• Example test cases and results on plant and leaf-wise segmentation and leaf counting.

ABSTRACT

Image-based approaches to plant phenotyping are gaining momentum, providing fertile ground for several interesting vision tasks where fine-grained categorization is necessary, such as leaf segmentation among a variety of cultivars, and cultivar (or mutant) identification. However, benchmark data focusing on typical imaging situations and vision tasks are still lacking, making it difficult to compare existing methodologies. This paper describes a collection of benchmark datasets of raw and annotated top-view color images of rosette plants. We briefly describe the plant material, imaging setup, and procedures for different experiments: one with various cultivars of Arabidopsis and one with tobacco undergoing different treatments. We proceed to define a set of computer vision and classification tasks and provide accompanying datasets and annotations based on our raw data. We describe the annotation process performed by experts and discuss appropriate evaluation criteria.
We also offer exemplary use cases and results on some tasks obtained with parts of these data. We hope that the release of this rigorous dataset collection will invigorate the development of algorithms in the context of plant phenotyping and also provide interesting new datasets for the general computer vision community to experiment on. Data are publicly available at http://www.plant-phenotyping.org/datasets.
Image-based plant phenotyping is a growing application domain of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape characteristics, the presence of occlusions and variability in leaf shape and pose, as well as in imaging conditions, render this problem challenging. The aim of this paper is to compare several leaf segmentation solutions on a unique, first-of-its-kind dataset containing images from typical phenotyping experiments. In particular, we report and discuss methods and findings of a collection of submissions for the first Leaf Segmentation Challenge (LSC) of the Computer Vision Problems in Plant Phenotyping (CVPPP) workshop in 2014. Four methods are presented: three segment leaves by processing the distance transform in an unsupervised fashion, and the fourth via optimal template selection and Chamfer matching. Overall, we find that although separating plant from background can be achieved with satisfactory accuracy (>90% Dice score), individual leaf segmentation and counting remain challenging when leaves overlap. Moreover, accuracy is lower for younger leaves. We also find that variability in datasets does affect outcomes. Our findings motivate further investigation and the development of specialized algorithms for this particular application, and suggest that challenges of this form are ideally suited for advancing the state of the art. Data are publicly available (http://www.plantphenotyping.org/CVPPP2014-dataset) to support future challenges beyond segmentation within this application domain.
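The plant-versus-background accuracy above is reported as a Dice score. As a point of reference, the metric can be sketched as below; this is a generic illustration of the Dice similarity coefficient between two binary masks, not the official LSC evaluation code:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks.

    Generic illustration of the foreground/background evaluation
    criterion; function name and toy masks are hypothetical.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# toy example: two overlapping square "plant" masks
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(round(dice_score(a, b), 3))  # → 0.694
```

A score of 1 means the masks coincide exactly; the >90% figure above refers to whole-plant masks, where the smooth outer boundary is far easier to recover than the boundaries between overlapping leaves.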
Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors, learned in an unsupervised fashion, to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves, and to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches onto the codebook using triangle encoding, introducing both sparsity and a specifically designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce the absolute counting error by 40% w.r.t. the winner of the 2014 edition of the challenge, a counting-via-segmentation method. When compared to state-of-the-art density-based approaches to counting, ∼75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
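The feature-coding step of this pipeline can be sketched as follows. This is a generic implementation of triangle encoding against a K-means codebook, not the authors' code; patch extraction, log-polar warping, pooling, and the SVR stage are omitted, and the random arrays stand in for real patch data:

```python
import numpy as np

def triangle_encode(patches, codebook):
    """Triangle encoding of patch vectors against a K-means codebook.

    patches:  (N, D) array of patch descriptors (here: random stand-ins
              for patches extracted in the log-polar domain)
    codebook: (K, D) array of K-means centroids
    returns:  (N, K) non-negative, sparse feature codes: each code is
              max(0, mean_distance - distance), so patches far from a
              centroid (beyond the mean distance) get a zero for it.
    """
    # pairwise Euclidean distances z[n, k] = ||x_n - c_k||
    dists = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=2)
    mu = dists.mean(axis=1, keepdims=True)  # mean distance per patch
    return np.maximum(0.0, mu - dists)

rng = np.random.default_rng(0)
patches = rng.normal(size=(5, 16))   # 5 hypothetical 16-dim patches
centroids = rng.normal(size=(8, 16)) # hypothetical codebook of size K=8
codes = triangle_encode(patches, centroids)
print(codes.shape)  # → (5, 8)
```

Roughly half the entries of each code vector are zeroed out, which is the sparsity the abstract refers to; the per-plant descriptor is then formed by pooling such codes over image regions.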
Plant phenotyping investigates how a plant's genome, interacting with the environment, affects the observable traits of a plant (phenome). It is becoming increasingly important in our quest towards efficient and sustainable agriculture. While sequencing the genome is becoming increasingly efficient, acquiring phenotype information has remained largely low-throughput. Current solutions for automated image-based plant phenotyping rely either on semi-automated or manual analysis of the imaging data, or on expensive and proprietary software that accompanies costly hardware infrastructure. While some attempts have been made to create software applications that enable the analysis of such images in an automated fashion, most solutions are tailored to particular acquisition scenarios and impose restrictions on experimental design. In this paper we propose and test a method for the segmentation and automated analysis of time-lapse plant images from phenotyping experiments in a general laboratory setting, one that can adapt to scene variability. The method involves minimal user interaction, necessary to establish the statistical experiments that may follow. At every time instance (i.e., a digital photograph), it segments the plants in images that contain many specimens of the same species. For accurate plant segmentation we propose a vector-valued level set formulation that incorporates features of color intensity, local texture, and prior knowledge. Prior knowledge is incorporated using a plant appearance model, implemented with Gaussian mixture models, which incrementally utilizes information from previously segmented instances. The proposed approach is tested on Arabidopsis plant images acquired with a static camera capturing many subjects at the same time.
Our validation with ground-truth segmentations and comparisons with state-of-the-art methods in the literature shows that the proposed method is able to handle images with complicated and changing backgrounds in an automated fashion. An accuracy of 96.7% (Dice similarity coefficient) was observed, higher than that of the other methods used for comparison. While here it was tested on a single plant species, the fact that we do not employ shape-driven models and do not rely on fully supervised classification (trained on a large dataset) increases the ease of deployment of the proposed solution for the study of different plant species in a variety of laboratory settings. Our solution will be accompanied by an easy-to-use graphical user interface and, to facilitate adoption, we will make the software available to the scientific community.
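The incrementally-updated appearance prior can be illustrated in a simplified form. The sketch below maintains a single Gaussian color model of plant pixels, updated online as new frames are segmented; the actual method uses Gaussian mixture models inside a level-set formulation, and all names and color values here are hypothetical:

```python
import numpy as np

class IncrementalColorPrior:
    """Running single-Gaussian color model of plant pixels.

    Simplified stand-in for the incrementally-updated GMM appearance
    prior: one Gaussian component instead of a mixture, updated with
    Welford's online mean/covariance algorithm.
    """
    def __init__(self, dim=3):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.eye(dim) * 1e-3  # regularized scatter matrix

    def update(self, pixels):
        """Fold in (N, 3) colors of pixels segmented as plant so far."""
        for x in pixels:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += np.outer(delta, x - self.mean)

    def log_likelihood(self, pixels):
        """Gaussian log-likelihood of (N, 3) candidate pixel colors."""
        cov = self.m2 / max(self.n - 1, 1)
        inv = np.linalg.inv(cov)
        diff = pixels - self.mean
        maha = np.einsum('ni,ij,nj->n', diff, inv, diff)
        d = self.mean.shape[0]
        return -0.5 * (maha + np.log(np.linalg.det(cov)) + d * np.log(2 * np.pi))

prior = IncrementalColorPrior()
# hypothetical plant pixels from a previously segmented frame (RGB in [0, 1])
prior.update(np.array([[0.20, 0.70, 0.20], [0.25, 0.65, 0.18], [0.18, 0.72, 0.22]]))
# score a plant-like color against a soil-like color
ll = prior.log_likelihood(np.array([[0.21, 0.69, 0.20], [0.50, 0.35, 0.20]]))
```

In the full method such a likelihood enters the level-set energy as a region term, so the prior learned from earlier time points biases the segmentation of later, harder frames.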