Using data from eight UK cohorts participating in the Healthy Ageing across the Life Course (HALCyon) research programme, with ages at physical capability assessment ranging from 50 to 90+ years, we harmonised data on objective measures of physical capability (i.e. grip strength, chair rising ability, walking speed, timed get up and go, and standing balance performance) and investigated the cross-sectional age and gender differences in these measures. Levels of physical capability were generally lower in study participants of older ages, and men performed better than women (for example, meta-analyses (N = 14,213; 5 studies) found that men had 12.62 kg (11.34, 13.90) higher grip strength than women after adjustment for age and body size), although for walking speed this gender difference was attenuated after adjustment for body size. There was also evidence that the gender difference in grip strength diminished with increasing age, whereas the gender difference in walking speed widened (p < 0.01 for interactions between age and gender in both cases). This study not only highlights the presence of age and gender differences in objective measures of physical capability but also demonstrates that harmonisation of data from several large cohort studies is possible. These harmonised data are now being used within HALCyon to understand the lifetime social and biological determinants of physical capability and its changes with age.
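A minimal sketch of the kind of within-cohort model this describes, assuming a harmonised data frame with hypothetical columns grip_kg, male (0/1), age, height_cm and weight_kg; the published HALCyon analyses may have parameterised their models differently.

```python
# Sketch: gender difference in grip strength adjusted for age and body size,
# with an age-by-gender interaction term to test whether the difference
# narrows with age, as reported above. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_grip_model(df: pd.DataFrame):
    """Fit an adjusted linear model within one harmonised cohort."""
    model = smf.ols(
        "grip_kg ~ male * age + height_cm + weight_kg", data=df
    ).fit()
    # params["male"] estimates the gender difference in kg;
    # pvalues["male:age"] tests the age-by-gender interaction.
    return model.params["male"], model.pvalues["male:age"]

# Usage (one cohort at a time; study-level estimates can then be meta-analysed):
# diff_kg, p_interaction = fit_grip_model(pd.read_csv("cohort_a.csv"))
```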
Introduction: Acute kidney injury (AKI) risk prediction scores are an objective and transparent means to enable cohort enrichment in clinical trials or to risk stratify patients preoperatively. Existing scores are limited in that they have been designed to predict only severe AKI, or AKI defined by non-consensus definitions, and not less severe stages of AKI, which also have prognostic significance. The aim of this study was to develop and validate novel risk scores that could identify all patients at risk of AKI.
Methods: Prospective routinely collected clinical data (n = 30,854) were obtained from 3 UK cardiac surgical centres (Bristol, Birmingham and Wolverhampton). AKI was defined as per the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines. The model was developed using the Bristol and Birmingham datasets, and externally validated using the Wolverhampton data. Model discrimination was estimated using the area under the ROC curve (AUC). Model calibration was assessed using the Hosmer–Lemeshow test and calibration plots. Diagnostic utility was also compared to existing scores.
Results: The risk prediction score for any-stage AKI (AUC = 0.74 (95% confidence interval (CI) 0.72, 0.76)) demonstrated better discrimination than the EuroSCORE and the Cleveland Clinic Score, and equivalent discrimination to the Mehta and Ng scores. The any-stage AKI score demonstrated better calibration than the four comparison scores. A stage 3 AKI risk prediction score also demonstrated good discrimination (AUC = 0.78 (95% CI 0.75, 0.80)), as did the four comparison risk scores, but stage 3 AKI scores were less well calibrated.
Conclusions: This is the first risk score that accurately identifies patients at risk of any stage of AKI. This score will be useful in the perioperative management of high-risk patients as well as in clinical trial design.
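A minimal sketch of the two validation steps named above, discrimination by area under the ROC curve and calibration by a Hosmer-Lemeshow-style decile test; the predictors, file handling and variable names are placeholders, not the variables used in the published AKI score.

```python
# Sketch: fit a logistic risk model on development data and assess it on
# external validation data via AUC and a Hosmer-Lemeshow decile statistic.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Chi-square statistic and p-value over risk-ordered groups."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    order = np.argsort(y_prob)
    y_true, y_prob = y_true[order], y_prob[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(y_prob)), groups):
        observed = y_true[idx].sum()
        expected = y_prob[idx].sum()
        n = len(idx)
        if 0 < expected < n:  # skip degenerate groups
            chi2 += (observed - expected) ** 2 / (expected * (1 - expected / n))
    return chi2, 1 - stats.chi2.cdf(chi2, groups - 2)

# Hypothetical arrays: X_dev/y_dev from the development centres,
# X_val/y_val from the external validation centre.
# model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
# p_val = model.predict_proba(X_val)[:, 1]
# print("AUC:", roc_auc_score(y_val, p_val))
# print("Hosmer-Lemeshow:", hosmer_lemeshow(y_val, p_val))
```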
Background: Grip strength, walking speed, chair rising and standing balance time are objective measures of physical capability that characterise current health and predict survival in older populations. Socioeconomic position (SEP) in childhood may influence the peak level of physical capability achieved in early adulthood, thereby affecting levels in later adulthood. We have undertaken a systematic review with meta-analyses to test the hypothesis that adverse childhood SEP is associated with lower levels of objectively measured physical capability in adulthood.
Methods and Findings: Relevant studies published by May 2010 were identified through literature searches using EMBASE and MEDLINE. Unpublished results were obtained from study investigators. Results were provided by all study investigators in a standard format and pooled using random-effects meta-analyses. Nineteen studies were included in the review. Total sample sizes in meta-analyses ranged from N = 17,215 for chair rise time to N = 1,061,855 for grip strength. Although heterogeneity was detected, there was consistent evidence in age-adjusted models that lower childhood SEP was associated with modest reductions in physical capability levels in adulthood: comparing the lowest with the highest childhood SEP, there was a reduction in grip strength of 0.13 standard deviations (95% CI: 0.06, 0.21), a reduction in mean walking speed of 0.07 m/s (0.05, 0.10), an increase in mean chair rise time of 6% (4%, 8%) and an odds ratio of an inability to balance for 5 s of 1.26 (1.02, 1.55). Adjustment for the potential mediating factors, adult SEP and body size, greatly attenuated the associations. However, despite this attenuation, for walking speed and chair rise time there was still evidence of moderate associations.
Conclusions: Policies targeting socioeconomic inequalities in childhood may have additional benefits in promoting the maintenance of independence in later life.
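A minimal sketch of the pooling step described above, assuming each study supplies an estimate of the childhood SEP difference (e.g. in SD units of grip strength) and its standard error; the numbers in the usage line are illustrative, not the published study data, and the DerSimonian-Laird estimator is one common way to fit a random-effects model.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study-level estimates.
import numpy as np

def random_effects_pool(estimates, std_errors):
    """Return the pooled estimate, its standard error and tau^2."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                                  # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)               # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)        # between-study variance
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

# Illustrative usage with made-up study estimates and standard errors:
# pooled, se, tau2 = random_effects_pool([0.10, 0.18, 0.09], [0.04, 0.06, 0.05])
# print(pooled, pooled - 1.96 * se, pooled + 1.96 * se)   # estimate and 95% CI
```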
Background: It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment.
Objectives: To develop algorithms to accurately identify pre-school children in whom urine should be obtained; to assess whether or not dipstick urinalysis provides additional diagnostic information; and to model algorithm cost-effectiveness.
Design: Multicentre, prospective diagnostic cohort study.
Setting and participants: Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms.
Methods: One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results, clinician opinion of UTI likelihood (‘clinical diagnosis') and urine sampling and treatment intentions (‘clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10⁵ colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were evaluated sequentially in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using the area under the receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the ‘clinical diagnosis' AUROC. Decision-analytic models were used to identify the optimal urine sampling strategy compared with ‘clinical judgement'.
Results: A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old. Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, ‘clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI, with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick results. Nappy pad samples were provided by the other 2277 children, of whom 82% were < 2 years old and 1.3% met the UTI definition. ‘Clinical diagnosis' correctly identified 13.3% of positive cultures, with 98.5% specificity and an AUROC of 0.63 (95% CI 0.53 to 0.72). Four symptoms and two dipstick results were independently associated with UTI, with an AUROC of 0.81 (0.72 to 0.90; validated 0.78) for symptoms, increasing to 0.87 (0.80 to 0.94; validated 0.82) with the dipstick findings. A high-specificity threshold for the clean-catch model was more accurate and less costly than, and as effective as, clinical judgement. The additional diagnostic utility of dipstick testing was offset by its costs. The cost-effectiveness of the nappy pad model was not clear-cut.
Conclusions: Clinicians should prioritise the use of clean-catch sampling, as symptoms and signs can cost-effectively improve the identification of UTI in young children where clean catch is possible. Dipstick testing can improve targeting of antibiotic treatment, but at a higher cost than waiting for a laboratory result. Future research is needed to distinguish pathogens from contaminants, to assess the impact of the clean-catch algorithm on patient outcomes, and to evaluate the cost-effectiveness of presumptive versus dipstick versus laboratory-guided antibiotic treatment.
Funding: The National Institute for Health Research Health Technology Assessment programme.
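A minimal sketch of one common bootstrap validation scheme for the AUROC mentioned above (optimism correction by refitting on resamples); whether this matches the exact procedure used in the study is an assumption, and the predictor matrix and outcome vector are placeholders for the symptom, sign and dipstick variables.

```python
# Sketch: apparent AUROC minus the mean bootstrap optimism.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bootstrap_validated_auc(X, y, n_boot=200, seed=0):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample with replacement
        if len(np.unique(y[idx])) < 2:          # skip resamples lacking both classes
            continue
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    return apparent - float(np.mean(optimism))  # optimism-corrected AUROC
```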
Aims: To report risk factors for visual acuity (VA) improvement and harm following cataract surgery, using electronically collected multi-centre data conforming to the Cataract National Dataset (CND).
Methods: Routinely collected anonymised data were remotely extracted from the electronic patient record systems of 12 participating NHS Trusts undertaking cataract surgery. Following data checks and cleaning, analyses were performed to identify risk indicators for: (1) a good acuity outcome (VA 6/12 or better), (2) the pre- to postoperative change in VA, and (3) VA loss (doubling or worse of the visual angle).
Results: In all, 406 surgeons from 12 NHS Trusts submitted data on 55,567 cataract operations. Preoperative VA was known for 55,528 (99.9%) and postoperative VA outcome for 40,758 (73.3%) operations. Important adverse preoperative risk indicators found in at least 2 of the 3 analyses included older age (3), short axial length (3), any ocular comorbidity (3), age-related macular degeneration (2), diabetic retinopathy (3), amblyopia (2), corneal pathology (2), previous vitrectomy (2), and posterior capsule rupture (PCR) during surgery (3). PCR was the only potentially modifiable adverse risk indicator and was strongly associated with VA loss (OR = 5.74).
Conclusion: Routinely collected electronic data conforming to the CND provide sufficient detail for identification and quantification of preoperative risk indicators for VA outcomes of cataract surgery. The majority of risk indicators are intrinsic to the patient or their eye, with a notable exception being PCR during surgery.
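A minimal sketch of how an adjusted odds ratio such as the one quoted for PCR can be read off a multivariable logistic model: the odds ratio is the exponentiated coefficient. The variable names below are illustrative and are not the actual CND fields.

```python
# Sketch: odds ratios and 95% CIs from a logistic model for VA loss.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_odds_ratios(df: pd.DataFrame):
    """Fit a logistic model for VA loss and exponentiate its coefficients."""
    model = smf.logit(
        "va_loss ~ pcr + age + short_axial_length + ocular_comorbidity",
        data=df,
    ).fit(disp=False)
    odds_ratios = np.exp(model.params)    # e.g. odds_ratios["pcr"]
    conf_int = np.exp(model.conf_int())   # 95% confidence intervals
    return odds_ratios, conf_int
```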