Oral reading fluency (ORF) was investigated initially by Deno, Mirkin, and Chiang (1982), with numerous studies published since then (Tindal, 2013). As originally designed, ORF was intended to provide teachers with a systematic way to monitor the effects of instruction over time. In the early days, passages were drawn at random from a curriculum series representing a "long-range goal" level of difficulty (material to be covered within the year), which inherently provided preview and review of material (Fuchs & Deno, 1994). Although early applications of ORF were oriented toward monitoring progress for students with disabilities, its use quickly spread to establishing "benchmarks" or norms at various times (seasons) in Grades 1 to 8, beginning with Tindal, Germann, and Deno (1983). Change over time has typically been reported in words correct per minute (wcpm), with either school days (5) or calendar days (7) used to create a weekly growth metric.

Concurrent with the research on ORF, its use in applied settings has become nearly ubiquitous within response to intervention (RTI) systems. In part, this is likely due to the emphasis in most RTI approaches on collecting learning data over time to evaluate instruction through some kind of decision-making process (Batsche et al., 2005). Typically, RTI includes high-quality, differentiated instruction organized into tiers (levels of intensity), as well as universal screening and progress monitoring to support the decision-making process (Liu, Alonzo, & Tindal, 2011). As a measurement system, ORF appears both technically adequate in its relation to other important indicators (e.g., statewide tests) and sensitive to change within the year. Shapiro, Zigmond, Wallace, and Marston (2011) noted that, in RTI, "progress monitoring plays such a key role, underlying many of the decisions made through the model" (p. xiv).

We position this research in an applied setting where ORF is being used by hundreds of teachers to monitor the progress of students in Tier 2 settings (those found to be significantly below their peers on fall benchmark measures). An important study for our research is that of Mellard, McKnight, and Woods (2009), who concluded that schools were screening students in various ways, using norms or a percentage of the population as cut points for risk assessment, placing students into instructional tiers (in varying proportions), and monitoring progress in Tiers 2 and 3. Through their in-depth interviews, they noted that "first, the importance of good record keeping systems was a recurring theme" (p. 192). For us, good record keeping includes sufficient progress monitoring to inform the decision-making process.
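As a minimal illustration of how the weekly wcpm growth metric described above might be computed, the sketch below fits a per-day least-squares slope to a series of wcpm scores and scales it by the number of days counted as a week (5 school days or 7 calendar days). The slope-based approach, the function name weekly_growth, and the sample data are assumptions for illustration, not the procedure reported in the studies cited above.

```python
# Illustrative sketch only: one common way to express ORF growth as a weekly
# rate is to fit an ordinary least-squares slope to words-correct-per-minute
# (wcpm) scores against the day each probe was given, then scale the per-day
# slope by the number of days treated as a "week" (5 school days or
# 7 calendar days). All names and values below are hypothetical.

from statistics import mean

def weekly_growth(days, wcpm, days_per_week=5):
    """Return the estimated wcpm gained per week.

    days          -- day index of each probe (e.g., school day 0, 5, 10, ...)
    wcpm          -- words correct per minute on each probe
    days_per_week -- 5 for school days, 7 for calendar days
    """
    x_bar, y_bar = mean(days), mean(wcpm)
    slope_per_day = (
        sum((x - x_bar) * (y - y_bar) for x, y in zip(days, wcpm))
        / sum((x - x_bar) ** 2 for x in days)
    )
    return slope_per_day * days_per_week

# Hypothetical Tier 2 progress-monitoring data: one probe per school week.
days = [0, 5, 10, 15, 20]      # school days since the first probe
wcpm = [42, 45, 44, 49, 52]    # words correct per minute on each probe
print(round(weekly_growth(days, wcpm, days_per_week=5), 2))  # ~2.4 wcpm/week
```

Scaling a per-day slope by a fixed number of days is one way to keep the weekly rate comparable across classrooms that administer probes on different schedules.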