Numerous injury criteria have been developed to predict brain injury from the kinematic response of the head during impact. Each criterion uses a metric that is some mathematical combination of the velocity and/or acceleration components of translational and/or rotational head motion. Early metrics were based on linear acceleration of the head, but recent injury criteria have shifted toward rotation-based metrics. Currently, there is no universally accepted metric suitable for a diverse range of head impacts. In this study, we assessed the capability of fifteen existing kinematic-based metrics to predict strain-based brain response across four different automotive impact conditions. Tissue-level strains were obtained through finite element model simulation of 660 head impacts, including occupant and pedestrian crash tests and pendulum head impacts. Correlations between head kinematic metrics and predicted strain-based brain metrics were evaluated. Metrics based on angular velocity correlated most strongly with brain strain among those evaluated, while metrics based on linear acceleration correlated least. BrIC and RVCI were the kinematic metrics with the highest overall correlation; however, each had limitations under certain impact conditions. The results of this study suggest that rotational head kinematics are the most important parameters for brain injury criteria.
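To illustrate the type of correlation analysis described above (the study's actual pipeline is not reproduced here), the sketch below correlates one hypothetical kinematic metric, peak resultant angular velocity, with FE-predicted maximum principal strain across a small set of impacts; the numeric values and the choice of a Pearson correlation are assumptions for demonstration only.

```python
# Illustrative sketch: correlating a kinematic metric with FE-predicted brain strain.
# The data below are fabricated placeholders, not values from the study.
import numpy as np
from scipy import stats

# Peak resultant angular velocity (rad/s) for a set of simulated head impacts
peak_ang_vel = np.array([18.2, 25.7, 31.4, 12.9, 40.1, 22.3, 35.8, 28.0])
# Corresponding FE-predicted maximum principal strain (MPS) for the same impacts
mps = np.array([0.21, 0.29, 0.38, 0.15, 0.51, 0.26, 0.44, 0.33])

# Pearson correlation quantifies how well the kinematic metric tracks brain strain
r, p_value = stats.pearsonr(peak_ang_vel, mps)
print(f"Pearson r = {r:.3f} (R^2 = {r**2:.3f}), p = {p_value:.3g}")
```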
Diffuse brain injuries are caused by excessive brain deformation generated primarily by rapid rotational head motion. Metrics that describe the severity of brain injury based on head motion often do not represent the governing physics of brain deformation, rendering them ineffective over a broad range of head impact conditions. This study develops a brain injury metric based on the response of a second-order mechanical system and relates rotational head kinematics to strain-based brain injury metrics: maximum principal strain (MPS) and cumulative strain damage measure (CSDM). This new metric, the universal brain injury criterion (UBrIC), is applicable over the broad range of kinematics encountered in automotive crash and sports impacts. The efficacy of UBrIC was demonstrated by comparing it to MPS and CSDM predicted in 1600 head impacts using two different finite element (FE) brain models. Relative to existing metrics, UBrIC had the highest correlation with the FE models and performed better in most impact conditions. While UBrIC provides a reliable measure for brain injury assessment over a broad range of head impact conditions and can inform helmet and countermeasure design, an injury risk function was not incorporated into its current formulation; one can be added once validated strain-based risk functions are developed and verified against human injury data.
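As an illustration of a metric built on the response of a second-order mechanical system, the sketch below drives a single-axis damped oscillator with a head angular acceleration pulse and reports its peak response. The natural frequency, damping ratio, and input pulse are hypothetical, and this is not the published UBrIC formulation.

```python
# Illustrative sketch of a single-axis, second-order mechanical system driven by
# head angular acceleration. All parameters and the input pulse are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

omega_n = 60.0   # natural frequency (rad/s), assumed
zeta = 0.5       # damping ratio, assumed

def ang_accel(t):
    # Hypothetical half-sine angular acceleration pulse (rad/s^2), 10 ms duration
    return 5000.0 * np.sin(np.pi * t / 0.010) if t < 0.010 else 0.0

def system(t, y):
    # y[0] = angular displacement of the oscillator, y[1] = its angular velocity
    theta, theta_dot = y
    theta_ddot = -2.0 * zeta * omega_n * theta_dot - omega_n**2 * theta + ang_accel(t)
    return [theta_dot, theta_ddot]

sol = solve_ivp(system, (0.0, 0.1), [0.0, 0.0], max_step=1e-4)
peak_response = np.max(np.abs(sol.y[0]))
print(f"Peak second-order system response: {peak_response:.4f} rad")
```

The peak oscillator response serves here as a stand-in for a strain surrogate; a metric of this type would map such a response to MPS or CSDM through a fitted relationship.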
Wearable sensors that accurately record head impacts experienced by athletes during play can enable a wide range of potential applications, including equipment improvements, player education, and rule changes. One challenge for wearable systems is their ability to discriminate head impacts from spurious recorded signals. This study describes the development and evaluation of a head impact detection system consisting of a mouthguard sensor and a machine learning (ML) model for distinguishing head impacts from spurious events in football games. Twenty-one collegiate football athletes participating in 11 games during the 2018 and 2019 seasons wore a custom-fit mouthguard instrumented with linear and angular accelerometers to collect kinematic data. Video was reviewed to classify sensor events, collected from instrumented players who sustained head impacts, as either head impacts or spurious events. Data from the 2018 games were used to train the ML model to classify head impacts from kinematic data features (127 head impacts; 305 non-head impacts). Performance of the mouthguard sensor and ML model was evaluated using an independent test dataset of 3 games from 2019 (58 head impacts; 74 non-head impacts). On the test dataset, the mouthguard sensor alone detected 81.6% of video-confirmed head impacts, while the ML classifier provided 98.3% precision and 100% recall, resulting in an overall head impact detection system with 98.3% precision and 81.6% recall.
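The sketch below illustrates, in outline, training a classifier on kinematic features and evaluating precision and recall on a held-out test set; the synthetic feature distributions, the feature names, and the choice of a random forest are assumptions and do not reflect the study's actual model or data.

```python
# Illustrative sketch: classifying head impacts vs. spurious events from kinematic
# features and reporting precision/recall on a held-out test set. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

def make_events(n, impact):
    # Hypothetical features: peak linear accel (g), peak angular accel (rad/s^2), pulse duration (ms)
    centers = [30.0, 2000.0, 10.0] if impact else [12.0, 700.0, 4.0]
    scales = [10.0, 600.0, 3.0] if impact else [6.0, 300.0, 2.0]
    return rng.normal(centers, scales, size=(n, 3))

# Sample counts mirror the study's training and test splits; the events themselves are synthetic
X_train = np.vstack([make_events(127, True), make_events(305, False)])
y_train = np.concatenate([np.ones(127), np.zeros(305)])
X_test = np.vstack([make_events(58, True), make_events(74, False)])
y_test = np.concatenate([np.ones(58), np.zeros(74)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(f"precision = {precision_score(y_test, y_pred):.3f}, recall = {recall_score(y_test, y_pred):.3f}")
```

Because the classifier only sees events the sensor records, the overall system recall is the product of the sensor's detection rate and the classifier's recall (0.816 × 1.00 ≈ 0.816 in the reported test data), while the overall precision follows the classifier's precision.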