This study seeks to answer a simple question: Is it possible to develop a scoring key for a situational judgment test (SJT) without a pool of subject matter experts (SMEs)? The SJT method is widely studied and used for selection in both occupational and educational settings (Oswald et al., 2004; Lievens & Sackett, 2012). SJTs are typically designed to measure procedural knowledge about how to behave effectively in a particular job (Motowidlo et al., 2006). Along these lines, SJT items are usually scored using rational keys based on ratings gathered from SMEs (Whetzel et al., 2020). The U.S. Office of Personnel Management defines an SME as a "person with bona-fide expert knowledge about what it takes to do a particular job" (Office of Personnel Management, 2016). Their judgments determine how each item will be scored. This poses a challenge when creating a new assessment because test developers typically need an item pool at least twice as large as the desired length of the final test (Hinkin, 1998; Murphy & Davidshofer, 1998). Each of these items needs a scoring key in order to be evaluated during an initial round of testing, which means that SMEs would likely spend most of their time rating items that will not appear in the final version of the test.

Researchers and test developers face several challenges when using SMEs. The criteria for determining who qualifies as an SME can be vague or inconsistent across studies. When SJTs are developed for a specific occupation, SMEs have included job incumbents, supervisors, customers, and even novices (Weekley et al., 2006). In many cases, these SMEs work in prestigious occupations (e.g., experienced physicians or other medical professionals; Lievens & Sackett, 2012; Patterson et al., 2009), which can make them expensive and difficult to recruit. For more construct-driven SJTs (e.g., applied social skills), academic researchers or graduate