Victorian Studies and the Digital Humanities (DH) are a natural fit: Victorian texts are in the public domain and thus available for full digitization and for computational analysis of all kinds. This essay focuses on arguably the most controversial of these applications: the use of statistical methodologies to analyze literary texts. The pros and cons of these methods, with criticism often aimed at the “distant reading” practices of Franco Moretti, have been widely debated among literary scholars, but with little reference to their historical roots. Historical accounts of DH have instead focused on technological developments, citing the massive, computerized concordance of the works of Thomas Aquinas, envisioned in 1949 by Father Roberto Busa and realized in collaboration with IBM over the next twenty years, as the moment DH was born. Such accounts, useful as they are, neglect important connections between the computational methodologies of DH and the emergence of modern statistical methodology in the nineteenth century. The advantage of seeing DH as part of the history of statistics is that it helps us understand and evaluate statistically based DH methodologies on their own terms, and not simply as methodologies associated with particular neoliberal institutions in the present. This essay argues that understanding the history of statistics, together with an examination of current practice, helps address the criticisms leveled at computational methodologies and the question of their compatibility with traditional humanistic methodologies such as close reading. It begins with a brief history of the emergence of modern statistics in the nineteenth century and its impact on Victorian literature and popular culture, and then examines recent DH scholarship that uses statistical methodologies to analyze Victorian texts.