This paper presents an abstract definition of partial inconsistency and one operator used to remove it: normalization. When partial inconsistency arises in the combination of two pieces of information, it is propagated to all the information in the system, thereby degrading it. To avoid this effect, normalization must be applied. Four different formalisms are studied as particular cases of the axiomatic framework presented in this paper: probability theory, infinitesimal probabilities, possibility theory, and symbolic evidence theory. It is shown that in one of these theories (probability theory) normalization poses no real problem: a property that holds in this case makes all the different normalization strategies equivalent. The situation is very different in the other three theories, where several distinct normalization procedures exist. The main objective of this paper is to determine conditions, based on general principles, indicating how and when the normalization operator should be applied.
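As a rough illustration of the probabilistic case (the notation here is ours, introduced only for this sketch and not taken from the framework's axioms), suppose that combining two pieces of information yields an unnormalized measure $p$ on a frame $X$ with $\sum_{x \in X} p(x) = 1 - k$; the quantity $k$ can be read as the degree of partial inconsistency, the mass lost to contradiction. Normalization then sets
\[
  p'(x) \;=\; \frac{p(x)}{\sum_{y \in X} p(y)} \;=\; \frac{p(x)}{1 - k},
\]
redistributing the missing mass proportionally. On this reading, the equivalence property mentioned above corresponds to the fact that pointwise multiplication commutes with rescaling, since $(p/a)\cdot(q/b) = (p \cdot q)/(ab)$, so the final result is the same whether one normalizes after every combination or only once at the end.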