Objective: Peer social functioning difficulties characteristic of ADHD persist into adolescence, but the efficacy of interventions for this age group remains unclear. Method: A systematic search of nonpharmacological interventions for adolescents with ADHD (10–18 years) identified 11 trials addressing social functioning, of which eight were included in meta-analyses. Results: Random effects meta-analyses of four randomized trials found no differences in social functioning between treatment and control groups by parent-report (g = −0.08 [−0.34, 0.19], k = 4, N = 354) or teacher-report (g = 0.17 [−0.06, 0.40], k = 3, N = 301). Meta-analyses of nonrandomized studies indicated participants’ social functioning improved from baseline to postintervention by parent-report, but not by teacher- or self-report. All trials had a high risk of bias. Conclusion: These results highlight the paucity of research in this age group. There is little evidence that current interventions improve peer social functioning. Clearer conceptualizations of developmentally relevant targets for remediation may yield more efficacious social interventions.
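The pooled estimates above come from random-effects meta-analysis of standardized mean differences (Hedges' g). As a minimal sketch of how such a pooled g and 95% confidence interval are typically computed, the DerSimonian-Laird implementation below uses hypothetical effect sizes and sampling variances, not the trial-level data reported in the review.

```python
# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis,
# a standard approach behind pooled Hedges' g estimates like those above.
# The effect sizes and variances are ILLUSTRATIVE placeholders only.
import numpy as np

def random_effects_pool(g, v):
    """Pool effect sizes g with within-study variances v (DerSimonian-Laird)."""
    w = 1.0 / v                                   # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    g_pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return g_pooled, (g_pooled - 1.96 * se, g_pooled + 1.96 * se)

g = np.array([-0.20, 0.05, -0.10, 0.02])   # hypothetical Hedges' g values
v = np.array([0.04, 0.05, 0.03, 0.06])     # hypothetical sampling variances
pooled, ci = random_effects_pool(g, v)
print(f"pooled g = {pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```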
Introduction: Parents shape child emotional competence and mental health via their beliefs about children’s emotions, emotion-related parenting, the emotional climate of the family, and by modelling emotion regulation skills. However, much of the research evidence to date has been based on small samples of mothers of primary school-aged children. Further research is needed to elucidate the direction and timing of associations for mothers and fathers/partners across different stages of child development. The Child and Parent Emotion Study (CAPES) aims to examine longitudinal associations between parent emotion socialisation, child emotion regulation and socioemotional adjustment at four time points from pregnancy to age 12 years. CAPES will investigate the moderating role of parent gender, child temperament and gender, and family background. Methods and analysis: CAPES recruited 2063 current parents of a child aged 0–9 years from six English-speaking countries, and 273 prospective parents (ie, women and their partners pregnant with their first child), in 2018–2019. Participants will complete a 20–30 min online survey at four time points 12 months apart, to be completed in December 2022. Measures include validated parent-report tools assessing parent emotion socialisation (ie, parent beliefs, the family emotional climate, supportive parenting and parent emotion regulation) and age-sensitive measures of child outcomes (ie, emotion regulation and socioemotional adjustment). Analyses will use mixed-effects regression to simultaneously assess associations over three time-point transitions (ie, T1 to T2; T2 to T3; T3 to T4), with exposure variables lagged to estimate how past factors predict outcomes 12 months later. Ethics and dissemination: Ethics approval was granted by the Deakin University Human Research Ethics Committee and the Deakin University Faculty of Health Human Research Ethics Committee. We will disseminate results through conferences and open access publications. We will invite parent end users to co-develop our dissemination strategy and to discuss the interpretation of key findings prior to publication. Trial registration: Protocol pre-registration: DOI 10.17605/OSF.IO/NGWUY.
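The analysis plan describes lagged mixed-effects regressions in which an exposure measured at one wave predicts an outcome 12 months later. The sketch below illustrates one way such a model could be specified; the variable names (family_id, wave, parent_beliefs, child_emotion_reg) are assumptions for illustration, not the study's actual measures.

```python
# Minimal sketch of a lagged mixed-effects regression of the kind described
# in the analysis plan: the exposure at wave t-1 predicts the child outcome
# at wave t, with a random intercept per family. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_lagged_model(df: pd.DataFrame):
    # Lag the exposure within each family so wave t-1 predicts the outcome at wave t.
    df = df.sort_values(["family_id", "wave"]).copy()
    df["parent_beliefs_lag1"] = df.groupby("family_id")["parent_beliefs"].shift(1)
    df = df.dropna(subset=["parent_beliefs_lag1"])

    # Random intercept for family; fixed effects of the lagged exposure and wave.
    model = smf.mixedlm(
        "child_emotion_reg ~ parent_beliefs_lag1 + wave",
        data=df,
        groups=df["family_id"],
    )
    return model.fit()
```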
Behavioral studies have shown that the ability to discriminate between non-native speech sounds improves after seeing how the sounds are articulated. This study examined the influence of visual articulatory information on the neural correlates of non-native speech sound discrimination. English speakers' discrimination of the Hindi dental and retroflex sounds was measured using the mismatch negativity (MMN) event-related potential, before and after they completed one of three 8-min training conditions. In an audio-visual speech training condition (n = 14), each sound was presented with its corresponding visual articulation. In one control condition (n = 14), both sounds were presented with the same visual articulation, resulting in one congruent and one incongruent audio-visual pairing. In another control condition (n = 14), both sounds were presented with the same image of a still face. The control conditions aimed to rule out the possibility that the MMN is influenced by non-specific audio-visual pairings, or by general exposure to the dental and retroflex sounds over the course of the study. The results showed that audio-visual speech training reduced the latency of the MMN but did not affect MMN amplitude. No change in MMN amplitude or latency was observed for the two control conditions. The pattern of results suggests that a relatively short audio-visual speech training session (i.e., 8 min) may increase the speed with which the brain processes non-native speech sound contrasts. The absence of a training effect on MMN amplitude suggests that a single session of audio-visual speech training does not lead to the formation of more discrete memory traces for non-native speech sounds. Longer and/or multiple sessions might be needed to influence MMN amplitude.
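The dependent measures here are MMN amplitude and latency, derived from the deviant-minus-standard difference waveform. The sketch below shows one common way these could be quantified; the search window and mean-amplitude window are illustrative assumptions, not the parameters reported in the study.

```python
# Minimal sketch of quantifying MMN latency and amplitude from a
# deviant-minus-standard difference waveform. The 100-250 ms search window
# and 40 ms mean-amplitude window are illustrative assumptions.
import numpy as np

def mmn_measures(difference_wave: np.ndarray, times_ms: np.ndarray,
                 window=(100, 250), mean_win_ms=40):
    """Return (peak latency in ms, mean amplitude around the negative peak)."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx_window = np.where(mask)[0]
    peak_idx = idx_window[np.argmin(difference_wave[idx_window])]  # MMN is negative
    latency = times_ms[peak_idx]

    half = mean_win_ms / 2
    amp_mask = (times_ms >= latency - half) & (times_ms <= latency + half)
    amplitude = difference_wave[amp_mask].mean()
    return latency, amplitude
```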