To design the
control chart, the in-control process mean and standard deviation must be estimated from historical samples, which negatively affects the chart's performance. The grand mean of the samples is the well-established estimator of the process mean. For the standard deviation, however, the chart's user can choose among at least five different estimators available in the literature. In this paper, using intensive simulations, we study and compare the performance of the
chart (under the normality assumption and with three-sigma limits) across the five most commonly used standard deviation estimators. The unconditional in-control run length (RL0) is the most commonly used performance measure of a control chart, and many authors base their comparisons on the expectation of the RL0 or on the mean squared error of the estimators. In contrast, we base our comparison on the proportion of RL0 values concentrated in intervals that practitioners usually consider undesirable because of the high incidence of false alarms during process monitoring (e.g., RL0 between 1 and 200), that is, alarms occurring earlier than expected compared with the known-parameter case. Our results show that all the studied standard deviation estimators yield similar performance. Even so, we show that one of the standard deviation estimators most recommended in the literature on control charts with estimated parameters may be the most inappropriate choice in terms of the undesired RL0 proportions.
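The kind of simulation described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: it assumes a Shewhart chart for the sample mean with three-sigma limits, Phase I parameters estimated from m historical samples, and one candidate standard deviation estimator (the average sample standard deviation divided by the unbiasing constant c4); the function name and all parameter defaults are hypothetical.

```python
import numpy as np
from math import gamma, sqrt

def c4(n):
    # Unbiasing constant for the sample standard deviation under normality:
    # c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2).
    return sqrt(2.0 / (n - 1)) * gamma(n / 2.0) / gamma((n - 1) / 2.0)

def simulate_rl0_proportion(m=25, n=5, n_sim=2000, rl_cap=200, seed=1):
    """Monte Carlo estimate of P(RL0 <= rl_cap) for a three-sigma chart
    for the sample mean whose limits use estimated parameters.
    Illustrative sketch only; estimator shown: Sbar / c4."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        # Phase I: m historical samples of size n from the in-control N(0, 1).
        phase1 = rng.standard_normal((m, n))
        mu_hat = phase1.mean()  # grand mean
        sigma_hat = phase1.std(axis=1, ddof=1).mean() / c4(n)
        ucl = mu_hat + 3 * sigma_hat / sqrt(n)
        lcl = mu_hat - 3 * sigma_hat / sqrt(n)
        # Phase II: monitor in-control sample means until a (false) alarm,
        # or stop once the run length exceeds the undesired interval.
        rl = 0
        while rl < rl_cap:
            rl += 1
            xbar = rng.standard_normal(n).mean()
            if xbar > ucl or xbar < lcl:
                hits += 1  # RL0 fell in the undesired interval [1, rl_cap]
                break
    return hits / n_sim
```

Repeating this for each candidate standard deviation estimator (pooled standard deviation, average range over d2, and so on) and comparing the resulting proportions mirrors the comparison strategy described in the abstract.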