Background: Deep learning algorithms for automated plant identification need large quantities of precisely labelled images in order to produce reliable classification results. Here, we explore which perspectives, and which combinations of them, contain the most characteristic information and therefore allow for higher identification accuracy. Results: We developed an image-capturing scheme to create observations of flowering plants. Each observation comprises five in-situ images of the same individual taken from predefined perspectives (entire plant, flower frontal and lateral view, leaf top and back side view). We collected a completely balanced dataset comprising 100 observations for each of 101 species, with an emphasis on groups of conspecific and visually similar species, including twelve Poaceae species. We used this dataset to train convolutional neural networks and to determine the prediction accuracy for each single perspective and for their combinations via score-level fusion. Top-1 accuracies ranged between 77% (entire plant) and 97% (fusion of all perspectives) when averaged across species. The flower frontal view achieved the highest single-perspective accuracy (88%). Fusing the flower frontal, flower lateral and leaf top views yields the most reasonable compromise between acquisition effort and accuracy (96%). The perspective achieving the highest accuracy was species dependent. Conclusions: We argue that image databases of herbaceous plants would benefit from multi-organ observations comprising at least the frontal and lateral perspectives of flowers and the leaf top view. Electronic supplementary material: The online version of this article (10.1186/s13007-019-0462-4) contains supplementary material, which is available to authorized users.
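For illustration, a minimal sketch of score-level fusion is given below, assuming each perspective has its own trained classifier that outputs softmax scores over the same set of species classes; the function names and the equal-weight averaging are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of score-level fusion across image perspectives.
# Assumes one trained classifier per perspective, each returning a softmax
# vector over the same species classes; names and the averaging scheme are
# illustrative assumptions, not taken from the original study's code.
import numpy as np

def fuse_scores(per_view_scores, weights=None):
    """Fuse class scores from several perspectives by (weighted) averaging."""
    scores = np.stack(per_view_scores, axis=0)        # (n_views, n_classes)
    if weights is None:
        weights = np.ones(len(per_view_scores))
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * scores).sum(axis=0) / weights.sum()

# Toy example: three perspectives over a 4-class problem.
flower_frontal = np.array([0.70, 0.10, 0.15, 0.05])
flower_lateral = np.array([0.40, 0.35, 0.20, 0.05])
leaf_top       = np.array([0.30, 0.10, 0.55, 0.05])

fused = fuse_scores([flower_frontal, flower_lateral, leaf_top])
print("fused scores:", fused)
print("top-1 class:", int(np.argmax(fused)))
```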
Many applications in chemistry, biology and medicine use microfluidic devices to separate, detect and analyze samples at a miniaturized scale. Fluid flows evolving in channels of only several tens to hundreds of micrometers in size are often three-dimensional in nature, affecting the tailored transport of cells and particles. To analyze flow phenomena and local particle distributions within such channels, astigmatic particle tracking velocimetry (APTV) has become a valuable tool, provided that basic requirements such as low optical aberrations and particles with a very narrow size distribution are fulfilled. Making use of the progress made in the field of machine vision, deep neural networks may help to overcome these limiting requirements, opening new fields of application for APTV and allowing it to be used by non-expert users. To qualify the use of a cascaded deep convolutional neural network (CNN) for particle detection and position regression, a detailed investigation was carried out, starting from artificial particle images with known ground truth and progressing to real flow measurements inside a microchannel, using particles with uni- and bimodal size distributions. In the case of monodisperse particles, a mean absolute error and standard deviation of the particle depth position of less than and about 1 µm, respectively, were determined with both the deep neural network and the classical evaluation method based on the minimum-Euclidean-distance approach. While these values apply to all particle size distributions when the neural network is used, for the classical method they increase continuously towards the margins of the measurement volume by about one order of magnitude if non-monodisperse particles are used. Nevertheless, by limiting the depth of the measurement volume to the region between the two focal points of APTV, reliable flow measurements with low uncertainty are also possible with the classical evaluation method and polydisperse tracer particles. The results of the flow measurements presented herein confirm this finding. The source code of the deep neural network used here is available at https://github.com/SECSY-Group/DNN-APTV.
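For context, the sketch below illustrates the classical minimum-Euclidean-distance evaluation referred to above, assuming a calibration curve of particle-image axis lengths a_x(z) and a_y(z) obtained beforehand; the toy calibration model and all names are illustrative assumptions rather than the code of the study.

```python
# Minimal sketch of the classical minimum-Euclidean-distance evaluation in
# APTV: a calibration curve gives the particle-image axis lengths a_x(z)
# and a_y(z); the depth of a measured particle is the calibration point
# closest to its measured (a_x, a_y) pair. The calibration model below is
# an illustrative assumption, not the paper's actual code.
import numpy as np

def build_calibration(z, ax_of_z, ay_of_z):
    """Stack a calibration curve as an (n, 3) array of (z, a_x, a_y)."""
    return np.column_stack([z, ax_of_z, ay_of_z])

def depth_from_axes(ax_meas, ay_meas, calibration):
    """Return the z of the calibration point with minimum Euclidean
    distance to the measured axis pair (a_x, a_y)."""
    d = np.hypot(calibration[:, 1] - ax_meas, calibration[:, 2] - ay_meas)
    return calibration[np.argmin(d), 0]

# Toy calibration: astigmatism makes a_x and a_y vary oppositely with z,
# with the two focal planes at z = -10 µm and z = +10 µm.
z = np.linspace(-20.0, 20.0, 401)                 # depth in micrometers
ax_cal = 3.0 + 0.08 * (z + 10.0) ** 2 / 20.0
ay_cal = 3.0 + 0.08 * (z - 10.0) ** 2 / 20.0
cal = build_calibration(z, ax_cal, ay_cal)

# A particle at z = +5 µm would show a_x ~ 3.9 and a_y ~ 3.1 in this model.
print(depth_from_axes(ax_meas=3.9, ay_meas=3.1, calibration=cal))  # ~ +5.0
```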
Defocus particle tracking (DPT) has gained increasing importance for determining particle trajectories in all three dimensions with a single-camera system, as is typical for a standard microscope, the workhorse of today's ongoing biomedical revolution. DPT methods derive the depth coordinates of particles from the different defocusing patterns their images show when observed in a volume much larger than the respective depth of field. It has therefore become common for state-of-the-art methods to apply image recognition techniques. Two of the most widely used DPT approaches are the application of (astigmatic) particle image model functions (MF methods) and normalized cross-correlation between measured particle images and reference templates (CC methods). Though still young in the field, neural networks (NN methods) are expected to play a significant role in future, more complex defocus tracking applications. To assess the strengths of these defocus tracking approaches, we present in this work a general and objective assessment of their performance when applied to synthetic and experimental images of different degrees of astigmatism, noise levels, and particle image overlap. We show that MF methods work very well in low-concentration cases, while CC methods are more robust and perform better at larger particle concentrations and thus stronger particle image overlap. The tested NN methods generally showed the lowest performance; however, compared to the MF and CC methods they are still at an early stage and have great potential to develop within the field of DPT.
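As an illustration of the CC approach described above, the sketch below matches a measured particle image against a stack of reference templates recorded at known depths and returns the depth of the best-correlating template; the Gaussian toy images and all names are assumptions for illustration only, not the benchmark implementation of the study.

```python
# Minimal sketch of a template-based (CC) defocus tracking step: compare a
# measured particle image against a calibration stack of reference
# templates at known depths and return the depth of the best-matching
# template (highest normalized cross-correlation). Toy data only.
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two same-sized images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def depth_from_templates(particle_img, template_stack, template_depths):
    """Return (depth, score) of the most similar reference template."""
    scores = [ncc(particle_img, t) for t in template_stack]
    best = int(np.argmax(scores))
    return template_depths[best], scores[best]

def blob(wx, wy, size=32):
    """Elliptical Gaussian toy particle image with widths (wx, wy)."""
    y, x = np.mgrid[:size, :size] - size // 2
    return np.exp(-(x**2 / (2.0 * wx**2) + y**2 / (2.0 * wy**2)))

# Astigmatic toy templates: widths vary oppositely with depth.
depths = np.linspace(-10.0, 10.0, 21)
templates = [blob(2.0 + 0.15 * (z + 10.0), 2.0 + 0.15 * (10.0 - z)) for z in depths]

rng = np.random.default_rng(0)
measured = blob(2.0 + 0.15 * 14.0, 2.0 + 0.15 * 6.0)   # true depth z = +4
measured += 0.02 * rng.standard_normal((32, 32))        # camera noise

z_est, score = depth_from_templates(measured, templates, depths)
print(f"estimated depth: {z_est:+.1f} (correlation {score:.3f})")
```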
This work presents the application of droplet-based microfluidics for the cultivation of microspores from Brassica napus using the doubled haploid technology. Under stress conditions (e.g. heat shock) or by chemical...
Defocus methods have become more and more popular for estimating the 3D position of particles in flows (Cierpka and Kähler, 2011; Rossi and Kähler, 2014). Typically, the depth positions of particles are determined from their defocused particle images using image processing algorithms. As these methods allow the determination of all components of the velocity vector in a volume using only a single optical access and a single camera, they are often used in, but not limited to, microfluidics. Since almost no additional equipment is necessary, they are low-cost methods that are meanwhile widely applied in different fields. To overcome the depth ambiguity of perfect optical systems, a cylindrical lens is often introduced into the optical system, which enhances the differences between the particle images obtained at different depth positions. However, various methods are emerging, and it is difficult for non-experienced users to judge which method might be best suited for a given experimental setup. Therefore, the aim of the presentation is a thorough evaluation of the performance of general advanced methods, including recently presented neural networks (Franchini and Krevor, 2020; König et al., 2020), based on typical images.
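To illustrate why the cylindrical lens removes the depth ambiguity, the sketch below uses a simplified Gaussian particle-image model with two separated focal planes; the model and its parameters are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch, assuming a simplified Gaussian particle-image model, of
# why a cylindrical lens resolves the depth ambiguity: without astigmatism
# the image diameter is symmetric about the focal plane (z and -z are
# indistinguishable), whereas with two separated focal planes the axis
# ratio a_x/a_y varies monotonically with z. Parameters are illustrative.
import numpy as np

def image_axes(z, f_x=0.0, f_y=0.0, a0=3.0, c=0.4):
    """Particle-image axis lengths for a particle at depth z, with the
    x- and y-focal planes located at f_x and f_y (all in micrometers)."""
    a_x = np.sqrt(a0**2 + (c * (z - f_x)) ** 2)
    a_y = np.sqrt(a0**2 + (c * (z - f_y)) ** 2)
    return a_x, a_y

z = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])

# Perfect (stigmatic) system: both focal planes coincide, so a_x == a_y
# and z = +4 is indistinguishable from z = -4.
print(image_axes(z, f_x=0.0, f_y=0.0))

# Astigmatic system: focal planes separated by 10 µm, so the ratio
# a_x/a_y increases monotonically with z and encodes its sign.
a_x, a_y = image_axes(z, f_x=-5.0, f_y=+5.0)
print(np.round(a_x / a_y, 3))
```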