Shewhart control charts (ShCCs) are a powerful and technically simple tool for analyzing process variability. At the same time, they cannot be fully algorithmized and require deep process knowledge together with additional data analysis. Although ShCCs are well known, the literature on them is vast, and standards for working with ShCCs exist in most countries, there are serious obstacles to their effective application that are discussed in neither the educational nor the scientific literature. It is these problems that this paper addresses. We analyze two sides of the standard assumption of data normality. First, we discuss the widespread misconception that measurement data always follow the Gaussian (normal) distribution. Then we show how deviations from normality may affect the way ShCCs are constructed and interpreted. Using data from a specific process, we discuss right and wrong ways to build a ShCC. The paper then introduces two new types of assignable causes of variation: those that do not change the system (I-type) and those that change it (X-type). Finally, we discuss how work with ShCCs should be organized to be effective. We emphasize that creating and analyzing ShCCs is always a systems question of interaction between the process and the person trying to improve that process.
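The abstract refers to constructing ShCCs under the standard normality assumption. As a minimal illustration only (not taken from the paper), the following Python sketch computes the classical 3-sigma limits of an individuals (I) chart from the average moving range; the data values are hypothetical, and the d2 constant reflects the textbook estimate of sigma that implicitly relies on the normality assumption the paper questions.

```python
# Minimal sketch of the standard individuals (I) Shewhart chart construction.
# Data are hypothetical and serve only to illustrate the calculation.

data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3, 9.7, 10.6]

# Moving ranges between consecutive observations
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]

mean = sum(data) / len(data)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Classical 3-sigma limits; sigma is estimated as MR-bar / d2 (d2 = 1.128 for n = 2),
# a step that leans on the normality assumption discussed in the paper.
d2 = 1.128
sigma_hat = mr_bar / d2
ucl = mean + 3 * sigma_hat
lcl = mean - 3 * sigma_hat

print(f"Center line: {mean:.3f}")
print(f"UCL: {ucl:.3f}, LCL: {lcl:.3f}")

# Points outside [LCL, UCL] would be flagged as signals of assignable causes
signals = [(i, x) for i, x in enumerate(data) if x > ucl or x < lcl]
print("Out-of-control points:", signals)
```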