Purpose: The present study examined the effect of open access (OA) status on scholarly and societal metrics of impact (citation counts and alternative metrics, or “altmetrics,” respectively) across manuscripts published in the ASHA Journals. Method: A total of 4,941 manuscripts published in four active ASHA journals were grouped across three access statuses based on their availability to the public: gold OA, green OA, and closed access. Two linear mixed-effects models tested the effects of OA status on citation counts and altmetric scores of the manuscripts. Results: Gold OA was associated with significantly higher altmetric scores (p < .001) but only marginally higher citation counts (p = .057) compared to closed access manuscripts. No significant differences in citation counts or altmetric scores were observed between green OA and closed access manuscripts. Discussion: Communication sciences and disorders (CSD) research that is fully open receives more online attention and slightly more scientific attention than research that is paywalled or available through alternative OA routes like self-archiving. Additional research is needed to understand secondary variables affecting these and other scholarly and societal metrics of impact across studies in CSD. Ongoing support and incentives to reduce the inequities of OA publishing are critical for continued scientific advancement.
Our current work builds on past research demonstrating that listeners experience a processing cost when hearing speech from multiple talkers compared to a single talker. This processing cost is thought to reflect a normalization process during which listeners adjust the mapping from the acoustic signal to speech sounds to accommodate talker differences in speech production. In the current studies, we use a speeded word identification paradigm to measure processing time for word recognition in single- vs. mixed-talker blocks and manipulate within-talker and between-talker variability along both phonetic (e.g., vowel formants) and indexical (e.g., fundamental frequency) dimensions. The results to date suggest that listeners incur processing costs given variability in either dimension, even in single-talker blocks, which raises critical methodological considerations for examining talker normalization in addition to informing theories of talker normalization.
Purpose: This study examined the effect of open access (OA) status on scholarly and societal metrics of impact (citation counts and altmetric scores, respectively) across manuscripts published in the American Speech-Language-Hearing Association (ASHA) Journals. Method: Three thousand four hundred nineteen manuscripts published in four active ASHA Journals were grouped across three access statuses based on their availability to the public: Gold OA, Green OA, and Closed Access. Two linear mixed-effects models tested the effects of OA status on citation counts and altmetric scores of the manuscripts. Results: Both Green OA and Gold OA significantly predicted increases of 2.70 and 5.21 citations, respectively, compared with Closed Access manuscripts (p < .001). Gold OA was estimated to predict a significant 25.7-point increase in altmetric scores (p < .001), whereas Green OA was only marginally significant (p = .68) in predicting a 1.44-point increase in altmetric scores relative to Closed Access manuscripts. Discussion: Communication sciences and disorders (CSD) research that is fully open receives more online attention and, overall, more scientific attention than research that is paywalled or available through Green OA methods. Additional research is needed to understand secondary variables affecting these and other scholarly and societal metrics of impact across studies in CSD. Ongoing support and incentives to reduce the inequities of OA publishing are critical for continued scientific advancement. Open Science Form: https://doi.org/10.23641/asha.21766919
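To make the analysis concrete, here is a minimal sketch of how such a linear mixed-effects model could be specified in Python with statsmodels; the data file, the column names (citations, oa_status, journal), and the use of journal as the random-effects grouping factor are illustrative assumptions rather than the authors' actual specification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with one row per manuscript; all column names
# are assumptions for illustration.
df = pd.read_csv("manuscripts.csv")

# Treatment-code OA status so Closed Access serves as the reference level,
# and let intercepts vary by journal (an assumed grouping factor).
model = smf.mixedlm(
    "citations ~ C(oa_status, Treatment(reference='Closed'))",
    data=df,
    groups=df["journal"],
)
result = model.fit()
print(result.summary())  # fixed-effect estimates for Gold and Green OA vs. Closed

A second model of the same form, with altmetric score as the outcome, would complete the pair of analyses the abstract describes.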
Previous research suggests that learning to use a phonetic property [e.g., voice onset time (VOT)] for talker identity supports a left ear processing advantage. Specifically, listeners trained to identify two “talkers” who differed only in characteristic VOTs showed faster talker identification for stimuli presented to the left ear than for stimuli presented to the right ear, which is interpreted as evidence of hemispheric lateralization consistent with task demands. Experiment 1 (n = 97) aimed to replicate this finding and identify predictors of performance; Experiment 2 (n = 79) aimed to replicate this finding under conditions that better facilitate observation of laterality effects. Listeners completed a talker identification task during pretest, training, and posttest phases. Inhibition, category identification, and auditory acuity were also assessed in Experiment 1. Listeners learned to use VOT for talker identity, and this learning was positively associated with auditory acuity. Talker identification was not influenced by ear of presentation, and Bayes factors indicated strong support for the null hypothesis. These results suggest that talker-specific phonetic variation is not sufficient to induce a left ear advantage for talker identification; together with the extant literature, this instead suggests that hemispheric lateralization for talker-specific phonetic variation requires phonetic variation to be conditioned on talker differences in source characteristics.
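The abstract does not state how the Bayes factors were computed; as one common approach, the BIC approximation BF01 ≈ exp((BIC_alternative − BIC_null) / 2) (Wagenmakers, 2007) can be sketched in Python on simulated placeholder data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data: identification times (ms) with no true ear effect.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rt": rng.normal(650, 80, size=200),
    "ear": np.tile(["left", "right"], 100),
})

null_fit = smf.ols("rt ~ 1", data=df).fit()      # intercept-only model
alt_fit = smf.ols("rt ~ C(ear)", data=df).fit()  # ear-of-presentation model

# BIC approximation to the Bayes factor favoring the null; values well
# above 1 indicate support for the null over the alternative.
bf01 = np.exp((alt_fit.bic - null_fit.bic) / 2)
print(f"BF01 = {bf01:.2f}")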
The goal of the current work was to develop and validate web-based measures for assessing English vocabulary knowledge. Two existing paper-and-pencil assessments, the Vocabulary Size Test (VST) and the Word Familiarity Test (WordFAM), were modified for web-based administration. In Experiment 1, participants (n = 100) completed the web-based VST. In Experiment 2, participants (n = 100) completed the web-based WordFAM. Results from these experiments confirmed that both tasks (1) could be completed online, (2) showed the expected sensitivity to English word frequency patterns, and (3) exhibited high split-half reliability, suggesting that stable vocabulary assessment could be achieved with fewer test items. Based on the results of Experiments 1 and 2, two “brief” versions of the VST and WordFAM were developed. Each version consisted of approximately half of the items from the full assessment, with non-overlapping items assigned to the two brief versions of each assessment. In Experiment 3, participants (n = 85) completed one brief version of both the VST and WordFAM at session one, followed by the other brief version of each task at session two. The results showed high test-retest reliability for both the VST (r = 0.68) and the WordFAM (r = 0.82). The two brief assessments also showed moderate convergent validity (r = 0.38 to r = 0.59), indicative of construct validity for each assessment. This work provides open-source vocabulary knowledge assessments with normative data that researchers and clinicians can use to foster high-quality data collection in web-based environments.
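As an illustration of the split-half reliability analysis reported above, a minimal sketch follows; the even/odd item split, the Spearman-Brown correction, and the simulated response matrix are assumptions, since the authors' exact procedure is not detailed here.

import numpy as np
from scipy.stats import pearsonr

# Simulated placeholder responses: 100 participants x 140 binary-scored items.
rng = np.random.default_rng(1)
scores = rng.integers(0, 2, size=(100, 140))

odd = scores[:, 0::2].sum(axis=1)   # per-participant total on odd-numbered items
even = scores[:, 1::2].sum(axis=1)  # per-participant total on even-numbered items

r_half, _ = pearsonr(odd, even)
# Spearman-Brown correction projects the half-test correlation to the
# reliability expected of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"split-half reliability (corrected) = {r_full:.2f}")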