Plant disease quantification, chiefly the measurement of the intensity of disease symptoms on individual units (severity), is the basis for a plethora of research and applied purposes in plant pathology and related disciplines. These include evaluating treatment effects, monitoring epidemics, understanding yield loss, and phenotyping for host resistance. Although sensor technology capable of measuring disease severity from imaging in the visible or other spectral ranges has long been available, visual sensing and perception still dominate, especially in field research. Awareness of the importance of the accuracy of visual severity estimates dates to 1892, when Cobb developed a set of diagrams as an aid to guide estimates of rust severity in wheat. Since that time, various approaches, some of them based on principles of psychophysics, have provided a foundation for understanding sources of error during the estimation process and for developing disease scales and disease-specific illustrations depicting the diseased area on specimens, similar to those developed by Cobb and known as standard area diagrams (SADs). Several rater-related (experience, inherent ability, training) and technology-related (instruction, scales, and SADs) characteristics have been shown to affect accuracy. This review provides a historical perspective on visual severity assessment, covering concepts, tools, changing paradigms, and methods to maximize the accuracy of estimates. Based on current knowledge, a list of best operating practices in plant disease quantification and directions for future research on the topic are presented.