Let a quantity of interest, $Y$, be modeled in terms of a quantity $X$ and a set of other quantities $\mathbf{Z}$. Suppose that for $\mathbf{Z}$ there is type B information, by which we mean information that leads directly to a joint state-of-knowledge probability density function (PDF) for that set, without reference to likelihoods. Suppose also that for $X$ there is type A information, meaning that a likelihood is available. The posterior PDF for $X$ is then obtained by updating its prior with that likelihood by means of Bayes' rule, where the prior encodes whatever type B information may be available for $X$; if there is no such information, an appropriate non-informative prior should be used. Once the PDFs for $X$ and $\mathbf{Z}$ have been constructed, they can be propagated through the measurement model, either analytically or numerically, to obtain the PDF for $Y$. But suppose that, at the same time, there is also information of type A, type B, or both for the quantity $Y$ itself. Processing that information in the manner described above yields another PDF for $Y$. Which one is right? Should the two PDFs be merged somehow? Is there another way of applying Bayes' rule that produces a single PDF for $Y$ encoding all the available information? In this paper we examine what we believe are the proper ways of dealing with such a (not uncommon) situation.
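To make the propagation step concrete, the following is a minimal Monte Carlo sketch under assumptions of our own choosing: a toy additive model $Y = X + Z$ with a single influence quantity $Z$, hypothetical repeated indications for $X$, and an assumed Gaussian type B PDF for $Z$. With a Gaussian likelihood and the standard non-informative prior, the posterior for $X$ is the well-known scaled-and-shifted Student-$t$ distribution; the sketch samples from it directly. It illustrates the first route described above (construct PDFs for $X$ and $\mathbf{Z}$, then propagate), not the treatment of additional information on $Y$ that is the subject of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Type A information for X: hypothetical repeated indications.
# Gaussian likelihood + standard non-informative prior gives a
# posterior for X that is Student-t with n-1 degrees of freedom,
# location xbar and scale s/sqrt(n).
x_obs = np.array([10.03, 9.98, 10.01, 10.05, 9.97])
n = x_obs.size
xbar, s = x_obs.mean(), x_obs.std(ddof=1)

M = 200_000  # Monte Carlo sample size
x = xbar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1, size=M)

# Type B information for Z: stated directly as a PDF (here, an
# assumed Gaussian with given mean and standard uncertainty).
z = rng.normal(loc=0.50, scale=0.02, size=M)

# Propagate through the assumed measurement model Y = X + Z.
y = x + z

print(f"E[Y] = {y.mean():.4f}")
print(f"u(Y) = {y.std(ddof=1):.4f}")
```

The resulting sample approximates the PDF for $Y$ implied by the information on $X$ and $\mathbf{Z}$ alone; the question raised above is how such a PDF should relate to one constructed from type A or type B information bearing directly on $Y$.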