There is a widespread view that, in order to be rational, we must mostly know what we believe. In the probabilistic tradition this is defended by arguments that a person who failed to have this knowledge would be vulnerable to sure loss, or probabilistically incoherent. I argue that even gross failure to know one's own beliefs need not expose one to sure loss, and does not if we follow a generalization of the standard bridge principle between first-order and second-order beliefs. This makes it possible for a subject to use probabilistic decision theory to manage, in a rational way, cases of potential failure of this self-knowledge, such as we find in implicit bias. Through such cases I argue that uncertainty about what our beliefs are can be not only rationally permissible but advantageous.

Must we have more or less accurate beliefs about our beliefs? Many otherwise diverse thinkers have taken this to be a requirement for rational beings (e.g. …). In the tradition that defines rationality by means of the axioms of probability, the reason for this view is arguments to the effect that a subject who either failed to be certain that he had a degree of belief he did have, or failed to have a degree of belief he was certain he had, would be vulnerable to sure loss. That is, there is a set of bets that such a subject would accept as fair and that would give him a loss no matter how the events he bet on turned out. This kind of vulnerability, which the word "incoherence" will refer to here,² is according to this tradition what rationality protects us from. I will argue here that, contrary to two entrenched arguments for this view, sure loss does not follow from failure to have accurate beliefs about our own beliefs. Mistaken belief about one's own belief is a failure, but it is a lack of knowledge, not a failure of rationality in the sense expressed by the probability axioms.
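The notion of sure loss at work here can be illustrated with a standard textbook Dutch-book calculation. The numbers below are illustrative and are not drawn from the text; the example shows the generic case of an axiom violation (credences that fail additivity), not the higher-order case the paper goes on to examine.

```latex
% Illustrative sketch only: a subject whose credences violate
% additivity, with Cr(A) = 0.6 and Cr(\neg A) = 0.6.
% She regards as fair paying 0.6 for a bet paying 1 if A obtains,
% and 0.6 for a bet paying 1 if \neg A obtains.
\[
  \text{total stake} \;=\; 0.6 + 0.6 \;=\; 1.2,
  \qquad
  \text{guaranteed payoff} \;=\; 1
\]
% Exactly one of A, \neg A obtains, so exactly one bet pays 1:
\[
  \text{net outcome} \;=\; 1 - 1.2 \;=\; -0.2
  \quad \text{however the event turns out.}
\]
```

The subject accepts each bet as fair, yet the package guarantees a loss of 0.2; this is the kind of vulnerability that the arguments under discussion claim also follows from inaccurate beliefs about one's own beliefs.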
If a rational being needs reasonably good knowledge of her own beliefs, we will need more than the constraint of probabilistic coherence to explain why.

² Usage of this word varies between vulnerability to sure loss and violation of the axioms, two concepts that are largely extensionally equivalent but that can come apart depending on how sure loss is defined. One can violate the axioms and, arguably, lack the relevant vulnerability (Hacking 1967); here the issue will be whether, having not violated the axioms, you can be vulnerable to sure loss by not having your higher- and lower-order beliefs in sync.