Conventional clinical neuroimaging is insensitive to axonal injury in traumatic brain injury (TBI). Immunocytochemical staining reveals changes to axonal morphology within hours of injury, suggesting potential for diffusion-weighted magnetic resonance (MR) in early diagnosis and management of TBI. Diffusion tensor imaging (DTI) characterizes the three-dimensional (3D) distribution of water diffusion, which is highly anisotropic in white matter fibers owing to the coherent orientation of axons. Recently, DTI has been used to investigate traumatic axonal injury (TAI), emphasizing regional analysis in more severe TBI. In the current study, we hypothesized that a global white matter (WM) analysis of DTI data would be sensitive to TAI across a spectrum of TBI severity and injury-to-scan interval. To investigate this, we compared WM-only histograms of a scalar DTI measure, fractional anisotropy (FA), between 20 heterogeneous TBI patients recruited from Detroit Medical Center, including six mild TBI (GCS 13-15), and 14 healthy age-matched controls. FA histogram parameters were correlated with admission GCS and posttraumatic amnesia (PTA). In all cases, including mild TBI, patients' FA histograms were globally decreased compared with control histograms. The shape of the TBI histograms also differed from controls, being more peaked and skewed. The mean FA, kurtosis, and skewness were highly correlated, suggesting a common mechanism. FA histogram properties also correlated with injury severity indexed by GCS and PTA, with mean FA being the best predictor and duration of PTA (r = 0.64) being superior to GCS (r = 0.47). Therefore, in this heterogeneous sample, mean FA accounted for 40% of the variance in PTA. Increased diffusion in the short-axis dimension, likely reflecting dysmyelination and swelling of axons, accounted for most of the FA decrease. FA is globally decreased in WM, including in mild TBI, possibly reflecting widespread involvement. FA changes appear to be correlated with injury severity, suggesting a role in early diagnosis and prognosis of TBI.
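The global histogram analysis described above can be sketched in a few lines: FA is computed per voxel from the three diffusion-tensor eigenvalues, and the WM-only FA sample is then summarized by its mean, skewness, and kurtosis. The synthetic eigenvalues below are illustrative stand-ins for real white-matter voxels, not data from the study.

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues (0 = isotropic, 1 = maximally anisotropic)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0  # mean diffusivity
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(1.5) * num / den

def histogram_summary(fa):
    """Mean, skewness, and excess kurtosis of an FA sample."""
    fa = np.asarray(fa, dtype=float)
    mu, sd = fa.mean(), fa.std()
    z = (fa - mu) / sd
    return {"mean": mu, "skewness": (z ** 3).mean(), "kurtosis": (z ** 4).mean() - 3.0}

# Synthetic "white-matter" voxels: sorted eigenvalues in mm^2/s
rng = np.random.default_rng(0)
evals = np.sort(rng.uniform(0.2e-3, 1.8e-3, size=(1000, 3)), axis=1)[:, ::-1]
fa = np.array([fractional_anisotropy(v) for v in evals])
summary = histogram_summary(fa)
```

A globally decreased FA mean together with a more peaked (higher-kurtosis), skewed histogram is the pattern the abstract reports in TBI patients relative to controls.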
Machine learning models are central to people's lives and impact society in ways as fundamental as determining how people access information. The gravity of these models imparts a responsibility on model developers to ensure that they treat users in a fair and equitable manner. Before deploying a model into production, it is crucial to examine the extent to which its predictions demonstrate biases. This paper deals with the detection of bias exhibited by a machine learning model through statistical hypothesis testing. We propose a permutation testing methodology that performs a hypothesis test that a model is fair across two groups with respect to any given metric. There is a growing number of fairness notions, each speaking to a different aspect of model behavior. Our aim is to provide a flexible framework that empowers practitioners to identify significant biases in any metric they wish to study. We provide a formal testing mechanism as well as extensive experiments to show how this method works in practice.
Many internet applications are powered by machine-learned models, which are usually trained on labeled datasets obtained through user feedback signals or human judgments. Since societal biases may be present in the generation of such datasets, it is possible for the trained models to be biased, thereby resulting in potential discrimination and harms for disadvantaged groups. Motivated by the need to understand and address algorithmic bias in web-scale ML systems and the limitations of existing fairness toolkits, we present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems. We highlight the key requirements in deployed settings, and present the design of our fairness measurement system. We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn. Finally, we provide open problems based on practical experience.
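As a language-agnostic illustration (not LiFT's actual API) of the kind of dataset-level fairness metric such a toolkit computes at scale, here is demographic parity difference, the gap in positive-prediction rates between two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between groups 1 and 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return rate1 - rate0

# Group 0 receives positive predictions 75% of the time, group 1 only 25%
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(preds, groups)  # -0.5
```

A value near zero indicates parity; in a deployed system like the one the abstract describes, the same per-group aggregation would run over distributed data rather than an in-memory array.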