Motor vehicle crashes remain the leading cause of death and injury for people aged 5-34, accounting annually for over 3,000 deaths and 100 times as many injuries. It is well established that distracted driving, and cell phone use while driving in particular, poses a significant crash risk to drivers. Research has demonstrated that drivers are well aware of this danger, yet over 90% of drivers report using a cell phone while driving. Given the likely role that social influence plays in how people use cell phones while driving, surprisingly little research has investigated to whom drivers are talking or texting. We report the results of a national survey that determined whom drivers are most likely to call or text when behind the wheel and compared these results with general cell phone calling and texting patterns, as well as with previous findings on the prevalence of calling and texting while driving. The results suggest that social distance is a key factor in cell phone use while driving: teens are more likely to talk with parents, and adults are more likely to talk with spouses, than general calling patterns would suggest. We discuss whether the purpose of calls made while driving, such as coordination, could help explain these patterns. We propose next steps for further examining the role social relationships play in cell phone use while driving, with the aim of reducing teen drivers' cell phone use by lowering the number of calls from parents.
Automated scoring based on Latent Semantic Analysis (LSA) has been used successfully to score essays and constrained short answer responses. Tests that elicit open-ended short answer responses pose additional challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure of creativity and divergent thinking that encourages a wide range of potential responses. Analyses demonstrated that the LSA scores were highly correlated with conventional Consequences Test scores, reaching a correlation of .94 with human raters, and were moderately correlated with performance criteria. This approach to scoring short answer constructed responses solves several practical problems, including the time required for humans to rate open-ended responses and the difficulty of achieving reliable scoring.
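To make the LSA approach concrete, the following is a minimal sketch (in Python, using scikit-learn) of one common way to score open-ended responses with LSA: embed human-rated reference answers in a latent semantic space and score a new response by its similarity-weighted agreement with those ratings. The prompt, responses, ratings, and parameter choices below are illustrative assumptions, not the authors' implementation or data.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical rated responses to a Consequences-style prompt
    # ("What would happen if people no longer needed sleep?").
    rated_responses = [
        "people would have more time to work and play",
        "the economy would grow because of longer working days",
        "mattress and bedroom furniture makers would go out of business",
        "new night-time industries and services would appear",
    ]
    human_scores = np.array([1.0, 2.0, 2.5, 3.0])  # illustrative ratings

    # LSA = TF-IDF followed by truncated SVD into a low-rank semantic space.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(rated_responses)
    lsa = TruncatedSVD(n_components=2, random_state=0)  # tiny corpus, tiny rank
    latent = lsa.fit_transform(tfidf)

    def lsa_score(response):
        # Score a new response as the similarity-weighted mean of human ratings.
        vec = lsa.transform(vectorizer.transform([response]))
        sims = np.clip(cosine_similarity(vec, latent).ravel(), 0, None)
        if sims.sum() == 0:  # no semantic overlap with the rated corpus
            return float(human_scores.mean())
        return float(np.average(human_scores, weights=sims))

    print(lsa_score("factories could run all night with round-the-clock shifts"))

In practice the reference corpus would contain many hundreds of rated responses and the number of SVD components would be tuned; the .94 correlation reported above refers to the authors' full system, not this sketch.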
The effectiveness of emerging technology in helping to develop the tacit, or experience-based, knowledge needed for effective leadership performance was demonstrated in an on-line environment for discussion and training. One hundred twenty-seven military students participated in three 20-minute discussions in one of three learning environments: standard classroom, standard on-line discussion, or discussion assisted by semantic technology. Consistent with expectations, semantic technology-supported learning resulted in greater discussion participation and training performance, with discussion participation mediating the relationship between the learning environment and training satisfaction. An interaction between learning goal orientation (LGO) and learning environment on tacit knowledge performance indicated that face-to-face conditions may help those with low LGO.
Purpose: Developing medical students' clinical reasoning requires a structured longitudinal curriculum with frequent targeted assessment and feedback. Performance-based assessments, which have the strongest validity evidence, are currently not feasible for this purpose because they are time-intensive to score. This study explored the potential of using machine learning technologies to score one such assessment: the diagnostic justification essay.
Method: From May to September 2018, machine scoring algorithms were trained to score a sample of 700 diagnostic justification essays written by 414 third-year medical students from the Southern Illinois University School of Medicine classes of 2012-2017. The algorithms applied semantically based natural language processing metrics (e.g., coherence, readability) to assess essay quality on 4 criteria (differential diagnosis, recognition and use of findings, workup, and thought process); the scores for these criteria were summed to create overall scores (a simplified sketch of this type of scoring pipeline follows this abstract). Three sources of validity evidence (response process, internal structure, and association with other variables) were examined.
Results: Machine scores correlated more strongly with faculty ratings than faculty ratings did with each other (machine: .28-.53; faculty: .13-.33) and were less case-specific. Machine scores and faculty ratings were similarly correlated with medical knowledge, clinical cognition, and prior diagnostic justification. Machine scores were more strongly associated with clinical communication than were faculty ratings (.43 vs. .31).
Conclusions: Machine learning technologies may be useful for assessing medical students' long-form written clinical reasoning. Semantically based machine scoring may capture the communicative aspects of clinical reasoning better than faculty ratings do, offering the potential for automated assessment that generalizes to the workplace. These results underscore the potential of machine scoring to capture an aspect of clinical reasoning performance that is difficult to assess with traditional analytic scoring methods. Additional research should investigate the generalizability of machine scoring and examine its acceptability to trainees and educators.
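As a rough illustration of the kind of pipeline the Method section describes, the sketch below (Python) extracts simple semantic features from an essay and fits one regressor per criterion against faculty ratings, summing the predicted criterion scores into an overall score. The feature computations (a TF-IDF adjacent-sentence coherence proxy and a textstat readability score) and all names are simplified stand-ins for the metrics named in the abstract, not the study's actual system.

    import numpy as np
    import textstat  # third-party readability metrics package
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.linear_model import Ridge

    # The four criteria named in the abstract.
    CRITERIA = ["differential_diagnosis", "findings", "workup", "thought_process"]

    def essay_features(text):
        # Simplified stand-ins for the semantic metrics named in the abstract.
        sentences = [s.strip() for s in text.replace("\n", " ").split(".") if s.strip()]
        if len(sentences) > 1:
            # Coherence proxy: mean cosine similarity between adjacent sentences.
            vecs = TfidfVectorizer().fit_transform(sentences)
            coherence = float(np.mean([
                cosine_similarity(vecs[i], vecs[i + 1])[0, 0]
                for i in range(len(sentences) - 1)
            ]))
        else:
            coherence = 0.0
        readability = textstat.flesch_reading_ease(text)
        return np.array([coherence, readability, len(sentences)])

    def train_criterion_models(essays, faculty_ratings):
        # faculty_ratings: dict mapping each criterion to an array of ratings,
        # one per training essay.
        X = np.vstack([essay_features(e) for e in essays])
        return {c: Ridge().fit(X, faculty_ratings[c]) for c in CRITERIA}

    def overall_score(models, essay):
        # Overall score = sum of the four predicted criterion scores.
        x = essay_features(essay).reshape(1, -1)
        return float(sum(models[c].predict(x)[0] for c in CRITERIA))

Validity evidence of the kind reported above (correlations with faculty ratings and with external measures such as clinical communication) would then be computed on held-out essays rather than the training sample.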