This article presents the state of practice of evaluability assessment (EA) as represented in the published literature from 1986 to 2006. Twenty-three EA studies were located, showing that EA was conducted in a wide variety of programs, disciplines, and settings. Most studies employed document reviews, site visits, and interviews, methodologies commonly recommended in the EA literature. Less common methodologies, such as standardized instruments and statistical modeling, were also found in the studies obtained for this review. The most common rationales for conducting EA mentioned in these studies were determining program readiness for impact assessment, program development, and formative evaluation. Outcomes reported in these studies included the construction of a program logic model, the development of goals and objectives, and the modification of program components. The findings suggest that EA is practiced and published more widely than previously known. Recommendations to enhance EA practice are offered.