Highlights: Inter-laboratory study with 174 participants using STRmix™. CE analysis settings resulted in larger differences in LR than the PG software. Differences in log(LR) due to MCMC variation were less than one order of magnitude.

Abstract: An intra- and inter-laboratory study using the probabilistic genotyping (PG) software STRmix™ is reported. Two complex mixtures from the PROVEDIt set, analysed on an Applied Biosystems™ 3500 Series Genetic Analyzer, were selected. 174 participants responded. For Sample 1, LRs were assigned with point estimates ranging from 2 × 10⁴ to 8 × 10⁶. For Sample 2 (in the order of 2000 rfu for major contributors), LRs ranged from 2 × 10²⁸ to 2 × 10²⁹. Where LRs were calculated, the differences between participants can be attributed to (from largest to smallest impact): a varying number of contributors (NoC), the exclusion of some loci from the interpretation, differences in local CE data analysis methods leading to variation in the peaks present and their heights in the input files used, and run-to-run variation due to the random sampling inherent to all MCMC-based methods. This study demonstrates a high level of repeatability and reproducibility among the participants. For those results that differed from the mode, the differences in LR were almost always minor or conservative.
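The small run-to-run spread in log(LR) attributed to MCMC sampling can be illustrated with a toy calculation. The sketch below is not STRmix™ and uses plain Monte Carlo integration as a stand-in for genotype-weight sampling; every number in it (likelihood shapes, scaling factor, sample count) is hypothetical. It only shows how repeated runs with different random seeds yield slightly different LR point estimates whose log10 spread stays well under one order of magnitude.

```python
# Toy illustration (not STRmix™): run-to-run variation in log10(LR) caused by
# random sampling. All shapes and constants are hypothetical.
import random
import math

def estimate_lr(seed, n_samples=20000):
    """Estimate a likelihood ratio by Monte Carlo integration over a nuisance
    parameter (here, a mixture proportion w), as a stand-in for the
    genotype-weight sampling done by MCMC-based PG software."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        w = rng.random()                                   # sampled mixture proportion
        num += math.exp(-((w - 0.7) ** 2) / 0.02)          # toy P(E | Hp, w)
        den += math.exp(-((w - 0.3) ** 2) / 0.02) * 1e-4   # toy P(E | Hd, w)
    return num / den

log_lrs = [math.log10(estimate_lr(seed)) for seed in range(10)]
print("log10(LR) across 10 runs:", [round(x, 3) for x in log_lrs])
print("spread:", round(max(log_lrs) - min(log_lrs), 3))    # far below one order of magnitude
```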
Two experiments used a magnitude estimation paradigm to test whether perception of disfluency is a function of whether the speaker and the listener stutter or do not stutter. Utterances produced by people who stutter were judged as "less fluent," and, critically, this held for apparently fluent utterances as well as for utterances identified as containing disfluency. Additionally, people who stutter tended to perceive utterances as less fluent, independent of who produced these utterances. We argue that these findings are consistent with a view that articulatory differences between the speech of people who stutter and people who do not stutter lead to perceptually relevant vocal differences. We suggest that these differences are detected by the speech self-monitoring system (which uses speech perception) resulting in covert repairs. Our account therefore shares characteristics with the Covert Repair (Postma & Kolk, 1993) and Vicious Circle (Vasić & Wijnen, 2005) hypotheses. It differs from the Covert Repair hypothesis in that it no longer assumes an additional deficit at the phonological planning level. It differs from the Vicious Circle hypothesis in that it no longer attributes hypervigilant monitoring to unknown, external factors. Rather, the self-monitor becomes hypervigilant because the speaker is aware that his/her speech is habitually deviant, even when it is not, strictly speaking, disfluent.
Dialogue move recognition is taken as being representative of a class of spoken language applications where inference about high-level semantic meaning is required from lower-level acoustic, phonetic or word-based features. Topic identification is another such application. In the particular case of inference from words, the multinomial distribution is shown to be inadequate for modelling word frequencies, and the multivariate Poisson is a more reasonable choice. Zipf's law is used to model a prior distribution. This more rigorous mathematical formulation is shown to improve dialogue move classification both subjectively and quantitatively.
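As a minimal sketch of the modelling idea (not the paper's implementation, and omitting the Zipf-law prior), the example below classifies a dialogue move by comparing independent-Poisson log-likelihoods of per-word counts; the move labels, vocabulary and rates are hypothetical.

```python
# Sketch: dialogue move classification with independent Poisson word-count models.
import math
from collections import Counter

# Hypothetical per-move mean word rates, as if estimated from training data.
poisson_rates = {
    "query":  {"where": 2.0, "is": 1.5, "please": 0.2},
    "inform": {"where": 0.1, "is": 1.0, "please": 0.05},
}

def poisson_log_likelihood(counts, rates):
    """log P(counts | move) under independent Poisson counts for each word."""
    ll = 0.0
    for word, lam in rates.items():
        k = counts.get(word, 0)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

def classify(utterance):
    counts = Counter(utterance.lower().split())
    return max(poisson_rates, key=lambda m: poisson_log_likelihood(counts, poisson_rates[m]))

print(classify("where where is the station please"))  # -> "query"
```

Unlike a multinomial model, which conditions on the utterance length, the Poisson formulation models each word's absolute count directly, which is the distinction the abstract draws.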
There has been little work that attempts to improve the recognition of spontaneous, conversational speech by adding information from a loosely coupled modality. This study investigated this idea by integrating information from gaze into an ASR system. A probabilistic framework for multimodal recognition was formalised and applied to the specific case of integrating gaze and speech. Gaze-contingent ASR systems were developed from a baseline ASR system by redistributing language model probability mass according to visual attention. The best performing systems had Word Error Rates similar to the baseline ASR system and showed an increase in keyword spotting accuracy. The key finding was that the observed performance improvements were due to increased recognition accuracy for words associated with the visual field but not the current focus of visual attention.
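A minimal sketch of the redistribution step, assuming a simple unigram language model: words associated with the gazed-at objects are boosted and the distribution is renormalised. The vocabulary, probabilities and boost factor below are hypothetical, not the paper's actual scheme.

```python
# Sketch: gaze-contingent reweighting of a toy unigram language model.
def gaze_contingent_lm(lm_probs, attended_words, boost=5.0):
    """Scale up the LM probability of words linked to the attended objects,
    then renormalise over this (toy) vocabulary."""
    scaled = {
        w: p * (boost if w in attended_words else 1.0)
        for w, p in lm_probs.items()
    }
    total = sum(scaled.values())
    return {w: p / total for w, p in scaled.items()}

lm = {"cup": 0.02, "saucer": 0.01, "the": 0.20, "move": 0.05}
print(gaze_contingent_lm(lm, attended_words={"cup", "saucer"}))
```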