Objective: Vocal turn-taking is an important predictor of language development in children with and without hearing loss. Most studies have examined vocal turn-taking in mother-child dyads without considering the multitalker context of a child's life. The present study investigates the quantity of vocal turns between deaf and hard-of-hearing children and multiple members of their social environment.

Design: Participants were 52 families with children who used hearing aids (HA, mean age 26.3 mo) or cochlear implants (CI, mean age 63.2 mo) and 27 families with normal-hearing (NH, mean age 26.6 mo) children. The Language ENvironment Analysis (LENA) system estimated the number of conversational turns per hour (CTC/hr) between all family members (i.e., adult female, adult male, target child, and other child) during full-day recordings collected over a period of about 1 year.

Results: CTC/hr was lower between the target child and the adult female or adult male in the CI group compared with the HA and NH groups. Initially, CTC/hr was higher between the target child and the adult female than between the target child and the adult male or the other child. As the child's age increased, turn-taking between the target child and the adult female increased relative to that between the target child and the adult male. Over time, turn-taking between the target child and the other child increased and exceeded turn-taking between the target child and the adult caregivers. This increase was observed earlier in families with siblings than in those without.

Conclusions: The quantity of vocal turn-taking depends on the degree of the child's hearing loss and on the relationship between the children and the members of their social environment. Longitudinally, assistive devices had a positive effect on the quantity of turns between the children and their family members, and the effect was stronger in families with siblings.
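For readers unfamiliar with the CTC metric, the sketch below illustrates how conversational turns per hour could, in principle, be tallied from speaker-labeled segments of a daylong recording. The speaker labels, the 5-second response window, and the pairing rule are simplifying assumptions for illustration only; they are not LENA's proprietary algorithm or the study's actual processing pipeline.

```python
from collections import Counter

# Hypothetical speaker labels loosely mirroring the categories in the abstract:
# target child (CHN), adult female (FAN), adult male (MAN), other child (CXN).
# Each segment is (speaker, start_sec, end_sec). The turn definition below
# (a response within 5 s) is a simplification, not LENA's actual algorithm.

def turns_per_hour(segments, target="CHN", max_gap=5.0):
    """Count target-child/partner exchanges per hour, broken down by partner."""
    segments = sorted(segments, key=lambda s: s[1])
    counts = Counter()
    recording_hours = (segments[-1][2] - segments[0][1]) / 3600.0
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(segments, segments[1:]):
        if start_b - end_a > max_gap:
            continue  # too long a silence to count as a response
        # Count an exchange whenever the target child and a partner alternate.
        if target in (spk_a, spk_b) and spk_a != spk_b:
            partner = spk_b if spk_a == target else spk_a
            counts[partner] += 1
    return {partner: n / recording_hours for partner, n in counts.items()}

# Example: a few seconds of interaction between the target child and the adult female.
example = [("FAN", 0.0, 1.2), ("CHN", 2.0, 2.8), ("FAN", 3.5, 4.6), ("MAN", 20.0, 21.0)]
print(turns_per_hour(example))
```

In a real analysis the segments would come from automated diarization of full-day audio, and the per-partner rates would be compared across groups and over child age, as described in the abstract.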
Students of color, particularly women of color, face substantial barriers in STEM disciplines in higher education due to social isolation and interpersonal, technological, and institutional biases. For example, online exam proctoring software often uses facial detection technology to identify potential cheating behaviors. When a student's face goes undetected, the software commonly flags the instance as "suspicious" and notifies the instructor that it needs manual review. However, the facial detection algorithms employed by exam proctoring software may be biased against students with certain skin tones or genders, depending on the images each company uses as training sets. This phenomenon has not yet been quantified, nor is such information readily available from the companies that make this type of software. To determine whether the automated proctoring software adopted at our institution, which is used by at least 1,500 universities nationally, suffered from a racial, skin-tone, or gender bias, the instructor outputs for ∼357 students from four courses were examined. Student data from one exam in each course were collected, a high-resolution photograph was used to manually categorize skin tone, and the self-reported race and sex of each student were obtained. The likelihood that any group of students was flagged more frequently for potential cheating was then examined. The results showed that students with darker skin tones and Black students were significantly more likely to be marked as in need of instructor review for potential cheating. Interestingly, there were no significant differences between male and female students in aggregate, but when intersectional differences were examined, women with the darkest skin tones were far more likely than darker-skinned males or lighter-skinned males and females to be flagged for review. Together, these results suggest that a major automated proctoring software package may employ biased AI algorithms that unfairly disadvantage students. This study is novel as the first to quantitatively examine biases in facial detection software at the intersection of race and sex, and it has potential impacts in many areas of education, social justice, educational equity and diversity, and psychology.
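As a hedged illustration of the kind of group comparison described above, the snippet below tests whether flag rates differ between two skin-tone groups using Fisher's exact test. The counts are invented placeholders and the choice of test is an assumption; the abstract does not specify the study's statistical procedure.

```python
import numpy as np
from scipy.stats import fisher_exact

# Illustrative (made-up) contingency counts: rows = skin-tone group,
# columns = [flagged for review, not flagged]. These numbers are NOT the
# study's data; they only show the shape of a flag-rate comparison.
darker = np.array([30, 70])   # hypothetical darker-skin-tone group
lighter = np.array([15, 85])  # hypothetical lighter-skin-tone group

table = np.vstack([darker, lighter])
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

# Flag rates per group, the quantity being compared in the abstract.
rates = table[:, 0] / table.sum(axis=1)
print(f"flag rate darker = {rates[0]:.2%}, lighter = {rates[1]:.2%}")
```

An intersectional analysis like the one reported (skin tone crossed with sex) would extend this idea to more than two groups, for example with a logistic regression on flag status with skin tone, sex, and their interaction as predictors.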
Do children with hearing loss use infant-directed speech? This study examined the speech characteristics of a 6-year-old child with bilateral cochlear implants and an age-matched child with normal hearing while they interacted with their infant siblings (aged 29 and 20 months) and with their mothers. Child-sibling and child-mother interactions were recorded in two conditions. In the “toy” condition, the children explained to their siblings and their mothers how to assemble a toy. In the “book” condition, the children narrated a story using a picture book. Sixty-five vocalizations were extracted from each child’s speech sample in each condition. Mean fundamental frequency, fundamental frequency range, utterance duration, number of syllables per utterance, and speech rate were measured. Both children produced higher fundamental frequency, an expanded fundamental frequency range, shorter utterance durations, and a slower speech rate in sibling-directed than in mother-directed speech in both the “book” and “toy” conditions. In mother-directed speech only, the children produced lower fundamental frequency, longer utterance durations, and more syllables per utterance in the “book” than in the “toy” condition. The results suggest that children with and without hearing loss modify the prosodic characteristics of their speech when interacting with a younger sibling, but the strength of the modification may be task-dependent.
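The sketch below shows one way the per-utterance acoustic measures named above (mean fundamental frequency, fundamental frequency range, utterance duration, and speech rate) could be computed, here with librosa's pYIN pitch tracker. The file name, pitch bounds, and hand-counted syllable total are hypothetical; this is not the study's measurement pipeline.

```python
import numpy as np
import librosa

# A minimal sketch of the per-utterance measures named in the abstract.
# The syllable count is supplied manually, as automatic syllable counting
# is not attempted here.

def utterance_measures(wav_path, n_syllables, fmin=75.0, fmax=600.0):
    y, sr = librosa.load(wav_path, sr=None)
    duration = len(y) / sr                      # utterance duration (s)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[voiced]                             # keep voiced frames only
    return {
        "mean_f0_hz": float(np.nanmean(f0)),
        "f0_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
        "duration_s": duration,
        "syllables": n_syllables,
        "speech_rate_syll_per_s": n_syllables / duration,
    }

# Example call with a hypothetical file and a hand-counted syllable total:
# print(utterance_measures("sibling_directed_utterance.wav", n_syllables=7))
```

Comparing these values between sibling-directed and mother-directed utterances, and between the “book” and “toy” conditions, corresponds to the contrasts reported in the abstract.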
How does hearing loss affect vocal turn-taking within families? This study examined turn-taking between children and multiple members of their social environment. For children with hearing loss, we also examined potential differences by device: hearing aids (HA) versus cochlear implants (CI). Daylong audio recordings were obtained monthly for about a year using a wearable recorder. Conversational turns per hour (CTC/hr) between children with and without hearing loss and their family members were estimated by automated speech processing. Results indicate that the CI children engaged in fewer CTC/hr with adult caregivers than the HA and normal-hearing groups. Initially, CTC/hr was higher between the target child and the adult female than between the target child and the adult male or the other child. As the children grew older, turn-taking between the target child and the adult female increased relative to that between the target child and the adult male. Over time, CTC/hr between the target child and the other child exceeded turn-taking between the target child and the adult caregivers. This increase occurred earlier in families with siblings. The results suggest that vocal turn-taking between family members depends on the degree of the child's hearing loss and on relationships within the family. Longitudinally, there was a positive effect of the assistive device on the quantity of turns.