Largely due to technological advances, methods for analyzing readability have proliferated in recent years. Past researchers designed hundreds of formulas to estimate how difficult texts are for readers, but their use has been controversial for decades. Criticism has centered on two practices: applying the formulas as guides for writing new texts, and relying on surface-level indicators (such as sentence length and word length) as proxies for the complex cognitive processes involved in reading. This review examines developments in the field of readability over the past two decades, with the goal of informing current and future research and offering recommendations for present use. Education, linguistics, cognitive science, psychology, discourse processing, and computer science have all made recent strides in developing new methods for predicting text difficulty for various populations. However, these methods require further development before they can become widely available.
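To make "surface-level indicators" concrete, consider the classic Flesch Reading Ease formula, which scores a text using only two surface counts: average sentence length and average syllables per word. The sketch below is a minimal Python illustration of this style of formula, not a method from the work under review; the vowel-group syllable counter is a crude stand-in for the dictionary-quality counts the formula assumes.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Built entirely from surface counts."""
    # Count sentences by runs of terminal punctuation (rough heuristic).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    # Approximate syllables as vowel groups, at least one per word.
    n_syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

# Higher scores indicate easier text (roughly 0-100 for typical prose).
print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```

Note that nothing in this computation models the reader's comprehension processes; it measures only the length of sentences and words, which is precisely the basis of the criticism described above.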