Efficient seed germination and establishment are important traits for field and glasshouse crops. Large-scale germination experiments are laborious and prone to observer error, motivating automated methods. We experimented with five crop species, namely tomato, pepper, Brassica, barley, and maize, and developed an approach for large-scale germination scoring. Here, we present the SeedGerm system, which combines cost-effective hardware and open-source software for seed germination experiments, automated seed imaging, and machine-learning-based phenotypic analysis. The software can process multiple image series simultaneously and produce reliable analysis of germination- and establishment-related traits, in both comma-separated values (CSV) and processed image (PNG) formats. In this article, we describe the hardware and software design in detail. We also demonstrate that SeedGerm could match specialists' scoring of radicle emergence. Germination curves were produced from seed-level germination timing and rates rather than from a fitted curve. In particular, by scoring germination across a diverse panel of Brassica napus varieties, SeedGerm implicates a gene important in abscisic acid (ABA) signalling in seeds. We compared SeedGerm with existing methods and conclude that it could have wide utility in large-scale seed phenotyping and testing, for both research and routine seed technology applications.
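As a concrete illustration of seed-level scoring, the sketch below (an assumption, not SeedGerm's actual code; the function and variable names are hypothetical) builds a cumulative germination curve from the times at which individual seeds are first scored as germinated, rather than fitting a parametric curve.

```python
# Minimal sketch (not SeedGerm's implementation): cumulative germination
# percentage over imaging timepoints, computed from per-seed germination times.
import numpy as np

def germination_curve(germination_times_h, n_seeds, timepoints_h):
    """Cumulative germination percentage at each imaging timepoint.

    germination_times_h : hours at which each germinated seed was first scored
        as germinated (seeds that never germinated are simply absent).
    n_seeds : total number of seeds sown.
    timepoints_h : imaging timepoints (hours) at which to evaluate the curve.
    """
    times = np.sort(np.asarray(germination_times_h, dtype=float))
    return [100.0 * np.sum(times <= t) / n_seeds for t in timepoints_h]

# Toy example: 20 seeds sown, 12 germinated at the recorded times below.
times = [36, 40, 40, 42, 44, 44, 46, 48, 50, 52, 56, 60]
curve = germination_curve(times, n_seeds=20, timepoints_h=range(0, 73, 6))
print(curve)  # cumulative germination percentage at each 6-hour timepoint
```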
This work is concerned with generating intelligible audio speech from a video of a person talking. Regression and classification methods are first proposed to estimate static spectral envelope features from active appearance model (AAM) visual features. Two further methods are then developed to incorporate temporal information into the prediction: a feature-level method using multiple frames and a model-level method based on recurrent neural networks. Speech excitation information is not available from the visual signal, so methods to artificially generate aperiodicity and fundamental frequency are developed. These are combined within the STRAIGHT vocoder to produce a speech signal. The various systems are optimised through objective tests before subjective intelligibility tests, which determine a word accuracy of 85% from a set of human listeners on the GRID audio-visual speech database. This compares favourably with a previous regression-based baseline system, which achieved a word accuracy of 33%.
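The sketch below is an assumed illustration rather than the paper's implementation: it shows a simple least-squares regression from AAM visual features to spectral-envelope features, with feature-level temporal context formed by stacking neighbouring frames. All array sizes, feature dimensions, and names are hypothetical.

```python
# Minimal sketch (assumed): linear regression from visual features to
# spectral-envelope features, with a feature-level temporal context.
import numpy as np

def stack_frames(X, context=2):
    """Concatenate each frame with +/- `context` neighbouring frames."""
    T, D = X.shape
    padded = np.vstack([np.repeat(X[:1], context, axis=0), X,
                        np.repeat(X[-1:], context, axis=0)])
    return np.hstack([padded[i:i + T] for i in range(2 * context + 1)])

# Toy data: T frames of 30-D AAM features and 40-D spectral envelope features.
rng = np.random.default_rng(0)
T = 500
visual = rng.normal(size=(T, 30))
envelope = rng.normal(size=(T, 40))

X = stack_frames(visual, context=2)               # (T, 150) with temporal context
X = np.hstack([X, np.ones((T, 1))])               # bias term
W, *_ = np.linalg.lstsq(X, envelope, rcond=None)  # closed-form least squares
predicted_envelope = X @ W                        # (T, 40) estimated envelope
```

A recurrent network, as in the model-level method described above, would replace the frame stacking with learned temporal state.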
This work examines whether visual speech information can be used within audio masking-based speaker separation to improve the quality and intelligibility of the target speech. Two visual-only methods of generating an audio mask for speaker separation are first developed. These use a deep neural network to map visual speech features to an audio feature space, from which both visually-derived binary masks and visually-derived ratio masks are estimated before application to the speech mixture. Secondly, an audio ratio-masking method forms a baseline approach for speaker separation, which is extended to exploit visual speech information to form audio-visual ratio masks. Speech quality and intelligibility tests are carried out on the visual-only, audio-only and audio-visual masking methods of speaker separation at mixing levels from -10 dB to +10 dB. These reveal substantial improvements in the target speech when applying the visual-only and audio-only masks, with the highest performance occurring when audio and visual information are combined to create the audio-visual masks.
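The following sketch illustrates ratio-mask and binary-mask separation in the short-time Fourier transform domain. It uses an oracle mask computed from the known target and interferer purely for illustration, whereas the work above estimates the mask from visual and/or audio features with a deep neural network; the signal names and STFT parameters are assumptions.

```python
# Minimal sketch (illustrative only): time-frequency masking for speaker
# separation using an oracle ratio mask built from the known sources.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
target = rng.normal(size=2 * fs)       # stand-in for the target talker
interferer = rng.normal(size=2 * fs)   # stand-in for the interfering talker
mixture = target + interferer

_, _, S_target = stft(target, fs=fs, nperseg=512)
_, _, S_interf = stft(interferer, fs=fs, nperseg=512)
_, _, S_mix = stft(mixture, fs=fs, nperseg=512)

# Ratio mask: target energy over total energy in each time-frequency cell.
ratio_mask = np.abs(S_target) ** 2 / (
    np.abs(S_target) ** 2 + np.abs(S_interf) ** 2 + 1e-12)
binary_mask = (ratio_mask > 0.5).astype(float)   # binary-mask variant

# Apply the mask to the mixture spectrogram and resynthesise the target.
_, separated = istft(ratio_mask * S_mix, fs=fs, nperseg=512)
```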
This work describes an investigation into the feasibility of producing intelligible audio speech from only visual speech features. The proposed method estimates a spectral envelope from visual features, which is then combined with an artificial excitation signal and used within a model of speech production to reconstruct an audio signal. Different combinations of audio and visual features are considered, along with both a statistical method of estimation and a deep neural network. The intelligibility of the reconstructed audio speech is measured by human listeners and compared to the intelligibility of the video signal alone and of the video combined with the reconstructed audio.
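As a rough illustration of the source-filter idea (not the STRAIGHT-based pipeline used in the work above), the sketch below shapes an artificial impulse-train-plus-noise excitation with per-frame magnitude envelopes and overlap-adds the result; the frame sizes, fundamental frequency, and all names are assumptions.

```python
# Minimal sketch (assumed): crude source-filter reconstruction from
# per-frame spectral envelopes and an artificial excitation signal.
import numpy as np

def synthesise(envelopes, f0_hz=120.0, fs=16000, hop=160):
    """envelopes: (n_frames, n_fft//2 + 1) magnitude spectral envelopes."""
    n_frames, n_bins = envelopes.shape
    n_fft = 2 * (n_bins - 1)
    out = np.zeros(n_frames * hop + n_fft)
    window = np.hanning(n_fft)
    # Artificial excitation: impulse train at f0 plus a little noise.
    excitation = np.zeros(len(out))
    excitation[::int(fs / f0_hz)] = 1.0
    excitation += 0.01 * np.random.default_rng(0).normal(size=len(out))
    for i in range(n_frames):
        start = i * hop
        frame = excitation[start:start + n_fft] * window
        spec = np.fft.rfft(frame) * envelopes[i]        # shape by the envelope
        out[start:start + n_fft] += np.fft.irfft(spec, n=n_fft) * window
    return out / (np.max(np.abs(out)) + 1e-9)

# Toy usage: 100 frames of a flat (all-ones) envelope.
audio = synthesise(np.ones((100, 257)))
```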