Otoacoustic emissions (OAEs) are useful for studying medial olivocochlear (MOC) efferents, but several unresolved methodological issues cloud the interpretation of the data they produce. Most efferent assays use a "probe stimulus" to produce an OAE and an "elicitor stimulus" to evoke efferent activity and thereby change the OAE. However, little attention has been given to whether the probe stimulus itself elicits efferent activity. In addition, most studies use only contralateral (relative to the probe) elicitors and do not include measurements to rule out middle-ear muscle (MEM) contractions. Here we describe methods to deal with these problems and present a new efferent assay based on stimulus-frequency OAEs (SFOAEs) that incorporates these methods. By using a post-elicitor window, we measure efferent effects from contralateral, ipsilateral, and bilateral elicitors in individual subjects. Using our SFOAE assay, we demonstrate that commonly used probe sounds (clicks, tone pips, and tone pairs) elicit efferent activity by themselves. Thus, results of efferent assays using these probe stimuli can be confounded by unwanted efferent activation. In contrast, the single 40 dB SPL tone used as the probe sound for SFOAE-based measurements evoked little or no efferent activity. Because they evoke efferent activation, clicks, tone pips, and tone pairs can be used in an adaptation efferent assay, but such paradigms are limited in measurement scope compared with paradigms that separate probe and elicitor stimuli. Finally, we describe tests to distinguish MEM effects from MOC effects for a number of OAE assays and show results from SFOAE-based tests. The SFOAE assay used in this study provides a sensitive, flexible, frequency-specific measure of medial efferent activation that uses a low-level probe sound eliciting little or no efferent activity, and thus yields results that can be interpreted without the confound of unintended efferent activation.
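To illustrate the kind of metric such an assay yields (this is an editorial sketch, not the authors' code): an efferent effect is often quantified as the normalized magnitude of the complex change in SFOAE pressure between elicitor-on and baseline conditions. The sketch below assumes the SFOAE has already been extracted as a complex pressure at the probe frequency; the function and example values are hypothetical.

```python
import numpy as np

def moc_effect_db(sfoae_baseline: complex, sfoae_elicitor: complex) -> float:
    """Normalized MOC effect, in dB, from complex SFOAE pressures.

    Both inputs are the SFOAE component at the probe frequency,
    measured with and without the efferent elicitor. The metric is
    the magnitude of the vector (complex) change normalized by the
    baseline magnitude -- a common way to express efferent-induced
    OAE changes (assumed here, not quoted from the abstract).
    """
    delta = abs(sfoae_elicitor - sfoae_baseline)  # vector change
    return 20.0 * np.log10(delta / abs(sfoae_baseline))

# Hypothetical example: a small magnitude-and-phase shift of the SFOAE
baseline = 1.00 * np.exp(1j * 0.0)       # arbitrary units
with_elicitor = 0.85 * np.exp(1j * 0.2)
print(f"MOC effect: {moc_effect_db(baseline, with_elicitor):.1f} dB")
```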
Background: Many studies have suggested that cognitive training can result in cognitive gains in healthy older adults. We investigated whether personalized computerized cognitive training provides greater benefits than those obtained by playing conventional computer games. Methods: This was a randomized double-blind interventional study. Self-referred healthy older adults (n = 155, 68 ± 7 years old) were assigned either to a personalized, computerized cognitive training group or to a computer games group. Cognitive performance was assessed at baseline and after 3 months with a neuropsychological assessment battery. Differences in cognitive performance scores between and within groups were evaluated using mixed-effects models in 2 approaches: adherence only (AO; n = 121) and intention to treat (ITT; n = 155). Results: Both groups improved in cognitive performance. The improvement in the personalized cognitive training group was significant (p < 0.03, AO and ITT approaches) in all 8 cognitive domains; in the computer games group it was significant (p < 0.05) in only 4 (AO) or 6 (ITT) domains. In the AO analysis, personalized cognitive training was significantly more effective than playing games in improving visuospatial working memory (p = 0.0001), visuospatial learning (p = 0.0012), and focused attention (p = 0.0019). Conclusions: Personalized, computerized cognitive training appears to be more effective than computer games in improving cognitive performance in healthy older adults. Further studies are needed to evaluate the ecological validity of these findings.
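For readers unfamiliar with the analysis named in the Methods, here is a minimal sketch of how between- and within-group score changes could be evaluated with a mixed-effects model in statsmodels. The data frame, file name, and column names are hypothetical; this is not the authors' actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per visit.
# Columns: subject (id), group ("training" or "games"), visit
# ("baseline" or "month3"), and score (the domain being analyzed).
df = pd.read_csv("cognitive_scores_long.csv")  # hypothetical file

# Random intercept per subject; the group:visit interaction term
# tests whether the two groups improved by different amounts.
model = smf.mixedlm("score ~ group * visit", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```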
Automated voice-based detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) could facilitate screening for COVID-19. A dataset of cellular phone recordings from 88 subjects was recently collected. The dataset included vocal utterances, speech, and coughs that were self-recorded by the subjects in either hospitals or isolation sites. All subjects underwent nasopharyngeal swabbing at the time of recording and were labelled as SARS-CoV-2 positives or negative controls. The present study harnessed deep machine learning and speech processing to detect the SARS-CoV-2 positives. A three-stage architecture was implemented. A self-supervised attention-based transformer generated embeddings from the audio inputs. Recurrent neural networks were used to produce specialized sub-models for the SARS-CoV-2 classification. Ensemble stacking fused the predictions of the sub-models. Pre-training, bootstrapping, and regularization techniques were used to prevent overfitting. A recall of 78% and a probability of false alarm (PFA) of 41% were measured on a test set of 57 recording sessions. A leave-one-speaker-out cross-validation on 292 recording sessions yielded a recall of 78% and a PFA of 30%. These preliminary results suggest that voice-based COVID-19 screening is feasible.
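As a rough illustration of the three-stage architecture described above (transformer embeddings, RNN sub-models, stacking), the PyTorch sketch below shows one plausible shape for the last two stages. The dimensions, module names, and fusion scheme are assumptions for illustration, not the study's actual implementation; the pretrained self-supervised transformer stage is not reproduced.

```python
import torch
import torch.nn as nn

class SubModel(nn.Module):
    """RNN sub-model: classifies a sequence of audio embeddings.

    The embeddings are assumed to come from a pretrained
    self-supervised attention-based transformer; that stage is
    treated as a black box here.
    """
    def __init__(self, embed_dim: int = 768, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, embed_dim) -> one logit per recording
        _, h = self.rnn(x)
        return self.head(h[-1]).squeeze(-1)

class StackingEnsemble(nn.Module):
    """Fuses sub-model logits with a learned linear combination,
    one simple form of ensemble stacking."""
    def __init__(self, sub_models: list):
        super().__init__()
        self.sub_models = nn.ModuleList(sub_models)
        self.fuser = nn.Linear(len(sub_models), 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = torch.stack([m(x) for m in self.sub_models], dim=-1)
        return self.fuser(logits).squeeze(-1)

# Hypothetical usage: three sub-models (in the study these would be
# specialized, e.g., per utterance type; a shared input is used here
# for brevity).
ensemble = StackingEnsemble([SubModel() for _ in range(3)])
fake_batch = torch.randn(4, 50, 768)         # 4 recordings, 50 frames
probs = torch.sigmoid(ensemble(fake_batch))  # P(SARS-CoV-2 positive)
print(probs.shape)  # torch.Size([4])
```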
In this article, we describe and interpret a set of acoustic and linguistic features that characterize emotional/emotion-related user states, confined here to the single database processed: four classes in a German corpus of children interacting with a pet robot. To this end, we collected a very large feature vector consisting of more than 4000 features extracted at different sites. We performed extensive feature selection (Sequential Forward Floating Search) for seven acoustic and four linguistic feature types, ending up with a small number of 'most important' features, which we interpret by discussing the impact of different feature and extraction types. We establish different measures of impact and discuss the mutual influence of acoustics and linguistics.
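Sequential Forward Floating Search (SFFS), the selection method named above, greedily adds the feature that most improves a criterion and then conditionally removes features if doing so improves it further. A minimal sketch using the mlxtend implementation follows; the classifier, scoring metric, data, and target feature count are illustrative choices, not those of the study.

```python
import numpy as np
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.svm import SVC

# Hypothetical data: rows are utterances, columns are acoustic and
# linguistic features, y holds the four emotion-class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # stand-in for the >4000-feature vector
y = rng.integers(0, 4, size=200)

# forward=True with floating=True gives SFFS: after each forward
# step, features are conditionally dropped if that improves the
# cross-validated score.
sffs = SFS(
    SVC(kernel="linear"),
    k_features=10,        # illustrative target size
    forward=True,
    floating=True,
    scoring="accuracy",
    cv=5,
)
sffs = sffs.fit(X, y)
print(sffs.k_feature_idx_)  # indices of the selected features
```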