Despite the widespread and rapidly growing popularity of Big Data, researchers have yet to agree on what the concept entails, what tools are still needed to best interrogate these data, whether Big Data's emergence represents a new academic field or simply a set of tools, and how much confidence we can place in results derived from Big Data. Despite these ambiguities, most would agree that Big Data and the methods for analyzing it hold remarkable potential for advancing social science knowledge. In my Presidential address to the Southern Demographic Association, I argue that demographers have long collected and analyzed Big Data in a small way, by parsing out the points of information that we can manipulate with familiar models and restricting analyses to what typical computing systems can handle or restricted-access data disseminators will allow. In order to better interrogate the data we already have, we need to change the culture of demography to treat demographic microdata as Big. This includes shaping the definition of Big Data, changing how we conceptualize models, and re-evaluating how we silo confidential data.

However, the data typically referred to as Big Data represent what Ruggles (2014) calls Big "shallow" Data. Indeed, the types of data frequently categorized as Big are exhaust data: data created incidentally, for purposes unrelated to research. These types of data are so common that some US government analysts define Big Data as "non-sampled data, characterized by the creation of databases from electronic sources whose primary purpose is something other than statistical inference" (Horrigan 2015). The questionable generalizability of these data is problematic for those seeking to make true statements about people's conditions.
Because of this limitation, demographers at the United Nations have recently called for a new data ecosystem that goes beyond exhaust data and encompasses the types of population-generalizable data that are the bases of good demographic analyses (HLG-PCCB 2016).
I agree with Ruggles that demographers have long collected and analyzed potentially Big deep Data: large datasets composed of population-generalizable data. Certainly, the entirety of coded US Census data would be one example; fifty years of the Panel Study of Income Dynamics would be another. Currently, however, our methodological, statistical, and computing training as demographers has left us ill-prepared to tackle the types of problems that can be addressed with these kinds of Big Data. That is, we have Big Data, but we treat it in a small fashion. Even if we knew how to pull four decades of US Census data into a system with a large enough memory or