Schools are increasingly relying on artificial intelligence (AI) technology to assist with a variety of tasks, including identifying students who are at risk for suicide. Specifically, schools use software installed on school-issued devices to monitor students' digital activity (e.g., Google search terms). This software applies proprietary algorithms and alerts school staff when a student's activity suggests suicide risk. These programs have been the subject of controversy, but there is scant research on how they are used or on their potential risks and benefits. The main goal of this report is to document how AI-based suicide risk monitoring is being implemented in schools and how it is affecting communities. Using this information, we offer recommendations for schools, caregivers, policymakers, and technology developers.
RAND Education and Labor

This study was undertaken by RAND Education and Labor, a division of the RAND Corporation that conducts research on early childhood through postsecondary education programs, workforce development, and programs and policies affecting workers, entrepreneurship, and financial literacy and decisionmaking.

More information about RAND can be found at www.rand.org. Questions about this report should be directed to Lynsay_Ayer