Most modern technical tasks require high-precision measurements. Achieving this precision requires analyzing the causes of measurement errors and taking measures to reduce their influence on accuracy. The causes of errors are highly diverse and cannot always be identified. However, some systematic components of the measurement error can be described and calculated mathematically. In this case, the task of reducing the signal at the output of a measuring device to the form it would have in an “ideal” device amounts to calculating a linear operator whose application to the measured signal minimizes the systematic error.

In this paper, the application of the reduction method is illustrated by a measuring instrument for the degree of polarization of light radiation, which comprises three channels, each measuring the intensity of linearly polarized radiation. Each channel is built around three operational amplifiers. The main channel errors that can be described and determined are the operational-amplifier errors associated with bias voltages and temperature drift. Real measuring systems contain many more such error components. However, the use of computers for modeling systems and processes, as well as for the measurements themselves, removes practical restrictions on processing the acquired data in software. With the help of computer processing it is possible to reduce the influence of perturbing effects and systematic errors, and also to eliminate gross errors. The random component of the error can be reduced by increasing the number of measurements and applying statistical data processing.
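As a minimal sketch of this idea (all matrices, offsets, and readings below are hypothetical illustrations, not calibration data for the instrument described here), the reduction operator for a three-channel device can be taken as the least-squares inverse of a calibrated channel response matrix, with op-amp offsets subtracted before the operator is applied; averaging repeated readings then addresses the random component:

    import numpy as np

    # Hypothetical calibration data for the three intensity channels:
    # A models the gains and cross-coupling of the non-ideal channels,
    # b models the additive offsets caused by op-amp bias voltages
    # (in practice b may itself drift with temperature).
    A = np.array([[1.02, 0.01, 0.00],
                  [0.00, 0.98, 0.02],
                  [0.01, 0.00, 1.01]])
    b = np.array([0.004, -0.002, 0.003])  # volts, from calibration

    # Reduction operator: least-squares (pseudo-)inverse of the response.
    R = np.linalg.pinv(A)

    def reduce_measurement(y):
        """Map raw channel outputs y to an estimate of the signal an
        'ideal' instrument would have produced."""
        return R @ (y - b)

    # Random error: average repeated readings and track the spread.
    raw = np.array([[0.512, 0.238, 0.251],
                    [0.509, 0.241, 0.249],
                    [0.514, 0.240, 0.252]])  # three repeated readings
    corrected = np.array([reduce_measurement(y) for y in raw])
    estimate = corrected.mean(axis=0)
    std_err = corrected.std(axis=0, ddof=1) / np.sqrt(len(corrected))
    print("intensities:", estimate, "+/-", std_err)

The pseudo-inverse is chosen here because it minimizes the residual in the least-squares sense, which matches the minimum-systematic-error criterion under these simplified assumptions; a full treatment would also account for the noise covariance of the channels.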