The many tools that social and behavioral scientists use to gather data from their fellow humans have, in most cases, been honed on a rarefied subset of humanity: highly educated participants with unique capacities, experiences, motivations, and social expectations. Through this honing process, researchers have developed protocols that extract information from these participants with great efficiency. However, as researchers reach out to broader populations, it is unclear whether these highly refined protocols are robust to cultural differences in skills, motivations, and expected modes of social interaction. In this paper, we illustrate the kinds of mismatches that can arise when using these highly refined protocols with nontypical populations by describing our experience translating an apparently simple social discounting protocol to work in rural Bangladesh. Multiple rounds of piloting and revision revealed a number of tacit assumptions about how participants should perceive, understand, and respond to key elements of the protocol. These included facility with numbers, letters, abstract number lines, and 2D geometric shapes, and the treatment of decisions as a series of isolated events. Through on-the-ground observation and a collaborative refinement process, we developed a protocol that worked both in Bangladesh and among US college students. More systematic study of the process of adapting common protocols to new contexts will provide valuable information about the range of skills, motivations, and modes of interaction that participants bring to studies as we develop a more diverse and inclusive social and behavioral science.

generalizability | diversity | cross-cultural | social discounting | Bangladesh

In 1932, the psychologist Rensis Likert (1) published his dissertation on a novel method for measuring attitudes.
After giving university students printed statements about race relations, Likert asked them to check one of five options (strongly approve, approve, undecided, disapprove, and strongly disapprove) indicating how much they endorsed each of these statements. Likert then assigned numbers to these levels of approval and took an average across all statements. The simplicity of both the response format and the construction of the scale soon spurred researchers to adopt elements of the technique to assess not only attitudes (Likert's original interest) but also subjective judgments along many dimensions, including likelihood, desirability, difficulty, and happiness (2). Today, after decades of testing and refinement on generations of participants, Likert's simple format has become a reliable mainstay of social and behavioral research.

Given its ubiquity in the social and behavioral sciences, one might guess that a five- or seven-item Likert format is a natural way of asking humans about their subjective judgments. However, in the rare cases when researchers have described their experience using Likert items outside of formally educated populations, they have been met with mixed success (3, 4). It turns out tha...
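Likert's scoring procedure, as described above, can be sketched in a few lines of code. This is a minimal illustration, not a reconstruction of his original analysis: the 1-to-5 coding of the five response options and the simple per-respondent average are the only elements taken from the text, and the function and variable names are our own.

```python
# Illustrative sketch of Likert-style scoring: each of the five response
# options is assigned a numeric code, and a respondent's attitude score
# is the average of those codes across all statements.

SCALE = {
    "strongly disapprove": 1,
    "disapprove": 2,
    "undecided": 3,
    "approve": 4,
    "strongly approve": 5,
}

def likert_score(responses):
    """Average the numeric codes of one respondent's answers."""
    codes = [SCALE[r] for r in responses]
    return sum(codes) / len(codes)

# One respondent's answers to three statements:
print(likert_score(["approve", "undecided", "strongly approve"]))  # 4.0
```

The averaging step is what made the format so portable: any set of ordered response labels, once coded numerically, yields a single comparable score per respondent.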