A question qualitative researchers are frequently asked is how they justify generalizing their findings to populations. In this article, I argue that this question stems from a misunderstanding of generalization that conflates the logic and mechanics of statistical generalization with the process typically used in qualitative methods, which does not rely on probability sampling procedures. To clarify the differences between these processes, I propose the concept of qualitative generalization. It builds on the work of scholars who have identified the logic of qualitative research as rooted in a cycle of inferential processes that identify forms of stability and variation in data. Instead of using probability sampling to capture variability in samples that reflects variability in a population, qualitative researchers use this cycle to develop a map of variation in their data that reflects the practice and experience of the phenomena under study: a logic describing generalization to the phenomenon, not the population. The initial application of this self-correcting cycle of inferences underpins the later stage of transferability of findings by readers. The framework of methodological integrity is used to explain how research goals, epistemological perspectives, and study characteristics (e.g., diversity) influence the identification of variation and, ultimately, qualitative generalization. This framework orients researchers toward identifying variation that can increase fidelity and utility, supporting qualitative generalization. The formulation of qualitative generalization proposed here is congruent with existing practices across a variety of qualitative traditions and with the reasoning intrinsic to qualitative methods.