Floral nectar is a rich secretion produced by the nectary gland and offered as a reward to attract pollinators, leading to improved seed set. Nectars are composed of a complex mixture of sugars, amino acids, proteins, vitamins, lipids, and organic and inorganic acids. This composition is influenced by several factors, including floral morphology, the mechanism of nectar secretion, time of flowering, and visitation by pollinators. The objective of this study was to determine the contributions of flowering time, plant phylogeny, and pollinator selection to nectar composition in Nicotiana. The main classes of nectar metabolites (sugars and amino acids) were quantified by gas chromatography/mass spectrometry to identify differences among fifteen Nicotiana species representing day- and night-flowering plants from ten sections of the genus that are visited by five different primary pollinators. The nectar metabolomes of different Nicotiana species can predict the feeding preferences of the target pollinator(s) of each species, and the nectar sugars (i.e., glucose, fructose, and sucrose) are a distinguishing feature of Nicotiana species phylogeny. Moreover, comparative statistical analyses indicate that pollinators are a stronger determinant of nectar composition than plant phylogeny.
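The abstract does not specify the statistical method, but a minimal sketch of one way to ask whether pollinator guild or phylogenetic section better explains nectar composition is to compare how tightly species cluster under each grouping. The file name, column names, and the silhouette-score criterion below are illustrative assumptions, not the authors' analysis.

```python
# Hypothetical sketch: compare pollinator vs. phylogenetic section as
# predictors of nectar metabolite profiles across Nicotiana species.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score

# Rows = species; metabolite columns (glucose, fructose, sucrose, amino
# acids, ...) plus "pollinator" and "section" labels.  File is hypothetical.
nectar = pd.read_csv("nicotiana_nectar_metabolites.csv")
metabolites = nectar.drop(columns=["species", "pollinator", "section"])

# Standardize so abundant sugars do not swamp low-concentration amino acids.
X = StandardScaler().fit_transform(metabolites)

# Higher silhouette = species sharing a label have more similar nectar.
by_pollinator = silhouette_score(X, nectar["pollinator"])
by_section = silhouette_score(X, nectar["section"])
print(f"grouped by pollinator: {by_pollinator:.2f}")
print(f"grouped by section:    {by_section:.2f}")
```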
High-throughput phenotyping enables the efficient collection of plant trait data at scale. One example involves using imaging systems over key phases of a crop growing season. Although the resulting images provide rich data for statistical analyses of plant phenotypes, image processing for trait extraction is a prerequisite. Current methods for trait extraction are based mainly on supervised learning with human-labeled data or semi-supervised learning with a mixture of human-labeled and unlabeled data. Unfortunately, preparing a sufficiently large training dataset is both time- and labor-intensive. We describe a self-supervised pipeline (KAT4IA) that uses K-means clustering on greenhouse images to construct training data for extracting and analyzing plant traits from an image-based field phenotyping system. The KAT4IA pipeline includes these main steps: self-supervised construction of a training set, plant segmentation from images of field-grown plants, automatic separation of target plants, calculation of plant traits, and functional curve fitting of the extracted traits. To address the challenge of separating target plants from noisy backgrounds in field images, we describe a novel approach using row-cuts and column-cuts on images segmented by transform-domain neural network learning, which utilizes plant pixels identified from greenhouse images to train a segmentation model for field images. This approach is efficient and does not require human intervention. Our results show that KAT4IA accurately extracts plant pixels and estimates plant heights.
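As a rough illustration of the self-supervised training-set construction step, the sketch below clusters the pixel colors of a greenhouse image with K-means and keeps the greener cluster as "plant" pixels. The file name, the two-cluster choice, and the excess-green heuristic are our own assumptions, not the KAT4IA implementation.

```python
# Hypothetical sketch: derive pixel-level plant/background labels from a
# greenhouse image without human annotation.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

image = io.imread("greenhouse_plant.png")[:, :, :3].astype(float)
pixels = image.reshape(-1, 3)

# With a plain greenhouse background, two color clusters are often enough.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)

# Excess-green index (2G - R - B) decides which cluster is the plant.
centers = kmeans.cluster_centers_
exg = 2 * centers[:, 1] - centers[:, 0] - centers[:, 2]
plant_cluster = int(np.argmax(exg))

# Binary mask usable as training labels for a field segmentation model.
mask = (kmeans.labels_ == plant_cluster).astype(np.uint8).reshape(image.shape[:2])
print("fraction of plant pixels:", mask.mean())
```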
High-throughput phenotyping is a modern technology for measuring plant traits efficiently and at large scale with imaging systems over the whole growing season. These images provide rich data for statistical analysis of plant phenotypes. We propose a pipeline to extract and analyze plant traits for field phenotyping systems. The proposed pipeline includes the following main steps: plant segmentation from field images, automatic calculation of plant traits from the segmented images, and functional curve fitting of the extracted traits. To deal with the challenging problem of plant segmentation in field images, we propose a novel approach to image pixel classification based on transform-domain neural network models, which utilizes plant pixels from greenhouse images to train a segmentation model for field images. Our results show that the proposed procedure accurately extracts plant heights and is more stable than measurements from Amazon Mechanical Turk workers, who manually measured plant heights from the original images.
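For the functional curve-fitting step, a minimal sketch is to fit a logistic growth curve to the extracted plant heights over time. The observation values below are made-up, and the logistic form is one common choice rather than the specific model used in the paper.

```python
# Hypothetical sketch: fit a logistic growth curve to extracted plant heights.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: asymptotic height K, growth rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.array([10, 17, 24, 31, 38, 45, 52, 59], dtype=float)
height_cm = np.array([12, 25, 48, 80, 115, 140, 152, 158], dtype=float)  # made-up data

# Reasonable starting values help the optimizer converge.
p0 = [height_cm.max(), 0.1, days.mean()]
params, _ = curve_fit(logistic, days, height_cm, p0=p0, maxfev=10000)
K, r, t0 = params
print(f"asymptote {K:.1f} cm, rate {r:.3f}/day, inflection at day {t0:.1f}")
```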
High-throughput plant phenotyping—the use of imaging and remote sensing to record plant growth dynamics—is becoming more widely used. The first step in this process is typically plant segmentation, which requires a well-labeled training dataset to enable accurate segmentation of overlapping plants. However, preparing such training data is both time- and labor-intensive. To solve this problem, we propose a plant image processing pipeline using a self-supervised sequential convolutional neural network method for in-field phenotyping systems. The first step uses plant pixels from greenhouse images to segment nonoverlapping in-field plants at an early growth stage and then applies the segmentation results from those early-stage images as training data for separating plants at later growth stages. The proposed pipeline is efficient and self-supervising in the sense that no human-labeled data are needed. We then combine this approach with functional principal components analysis to reveal the relationship between plant growth dynamics and genotypes. We show that the proposed pipeline can accurately separate the pixels of foreground plants and estimate their heights even when foreground and background plants overlap, and can thus be used to efficiently assess the impact of treatments and genotypes on plant growth in a field environment using computer vision techniques. This approach should be useful for answering important scientific questions in the area of high-throughput phenotyping.
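To illustrate the functional principal components step, the sketch below discretizes growth curves on a common time grid, centers them, and takes an SVD; the leading components summarize the main modes of growth variation, and the per-plant scores can then be related to genotype or treatment. The synthetic curves are placeholders and the plain SVD-based FPCA is our own simplification, not the authors' implementation.

```python
# Hypothetical sketch: functional PCA on plant-height growth curves via SVD.
import numpy as np

rng = np.random.default_rng(0)
days = np.linspace(10, 60, 26)                  # common time grid
n_plants = 40

# Synthetic logistic-like curves standing in for extracted plant heights.
K = rng.uniform(120, 170, n_plants)             # asymptotic heights
r = rng.uniform(0.08, 0.15, n_plants)           # growth rates
curves = K[:, None] / (1 + np.exp(-r[:, None] * (days - 35)))

# Discretized FPCA: rows are plants, columns are time points.
mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

explained = S**2 / np.sum(S**2)
scores = U * S                                   # per-plant FPC scores
print("variance explained by first two FPCs:", explained[:2].round(3))
# The scores can then enter a linear model with genotype/treatment effects.
```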