A special type of ordinal scale, comprising a number of intervals with known numeric ranges, can be used when estimating the severity of a plant disease. The interval ranges are most often based on the percent area with symptoms [e.g. the Horsfall-Barratt (H-B) scale]. Studies in plant pathology and plant breeding often use this type of ordinal scale. Disease severity is estimated by a rater as a value on the scale, and these values have been used to determine a disease severity index (DSI) on a percentage basis, where DSI (%) = [sum (class frequency × score of rating class)]/[(total number of plants) × (maximal disease index)] × 100. However, very few studies have investigated the effects of different scales on the accuracy of the DSI. Therefore, the objectives of this study were to investigate the process of calculating a DSI on a percentage basis from ordinal scale data, and to use simulation approaches to explore the effects on DSI estimates (%) of different methods for calculating the interval range and of the nature of the ordinal scale used. We found that the DSI is particularly prone to overestimation when using the above formula if the midpoint values of the rating classes are not considered. Moreover, the results of the simulation studies show that, if rater estimates are unbiased, the most accurate method tested in this study for estimating a DSI is to use the midpoint of the severity range for each class with an amended 10% ordinal scale (an ordinal scale based on a 10% linear scale emphasising severities ≤50% disease, with additional grades at low severities). Under biased conditions, the accuracy of DSI estimates (%) depends mainly on the degree and direction of the rater bias relative to the actual mean value.
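The DSI formula above, and the midpoint-based alternative, can be sketched as follows. This is a minimal illustration only: the 6-class scale, the class severity ranges, and the plant counts are all invented for the example and are not taken from the study.

```python
def dsi_percent(class_counts, max_score):
    """DSI (%) = [sum(class frequency x class score)] /
    [(total plants) x (maximal disease index)] x 100."""
    total_plants = sum(class_counts.values())
    weighted = sum(count * score for score, count in class_counts.items())
    return weighted / (total_plants * max_score) * 100

# Hypothetical counts: class score -> number of plants rated in that class.
counts = {0: 10, 1: 20, 2: 30, 3: 20, 4: 15, 5: 5}

# DSI (%) computed from class scores, as in the formula above.
dsi = dsi_percent(counts, max_score=5)

# Midpoint-based mean severity (%): weight each class by the midpoint of
# its severity range instead of its score. The ranges implied here
# (0, 1-20, 21-40, 41-60, 61-80, 81-100) are an invented linear scale.
midpoints = {0: 0.0, 1: 10.5, 2: 30.5, 3: 50.5, 4: 70.5, 5: 90.5}
mean_severity = (sum(c * midpoints[s] for s, c in counts.items())
                 / sum(counts.values()))

print(dsi)            # score-based DSI (%)
print(mean_severity)  # midpoint-based mean severity (%)
```

With these illustrative numbers the score-based DSI exceeds the midpoint-based mean severity, consistent with the overestimation described above when class midpoints are ignored.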