Human decision making is accompanied by a sense of confidence. According to Bayesian decision theory, confidence reflects the learned probability of making a correct response, given the available data (e.g., accumulated stimulus evidence and response time). Although optimal, learning these probabilities independently for every possible combination of data is computationally intractable. Here, we describe a novel model of confidence that implements a low-dimensional approximation of this optimal yet intractable solution. With only a small number of free parameters, the model allows efficient estimation of confidence while accounting for idiosyncrasies, different kinds of biases, and deviations from the optimal probability correct. Our model dissociates confidence biases resulting from individuals' estimates of the reliability of evidence (captured by parameter α) from confidence biases resulting from general, stimulus-independent under- and overconfidence (captured by parameter β). We provide empirical evidence that this model accurately fits both choice data (accuracy, response time) and trial-by-trial confidence ratings simultaneously. Finally, we test and empirically validate two novel predictions of the model: (1) changes in confidence can be independent of performance, and (2) selectively manipulating each parameter of the model leads to distinct patterns of confidence judgments. As the first tractable and flexible account of the computation of confidence, our model provides concrete tools to construct computationally more plausible models, and offers a clear framework to interpret and further resolve different forms of confidence biases.
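For illustration only, a minimal sketch of what a two-parameter approximation of the probability of being correct could look like. The logistic link, the time scaling, and the function name are assumptions introduced here for clarity, not the exact formulation of the model described above; they merely show how α could scale the perceived reliability of evidence while β adds a stimulus-independent confidence bias.

```python
import numpy as np

def approx_confidence(evidence, rt, alpha, beta):
    """Hypothetical low-dimensional approximation of P(correct | evidence, RT).

    alpha : scales the perceived reliability of the accumulated evidence
    beta  : stimulus-independent additive bias (under-/overconfidence)

    The logistic link and the 1/sqrt(rt) time scaling are illustrative
    assumptions, not the authors' exact equations.
    """
    reliability_weighted_evidence = alpha * np.abs(evidence) / np.sqrt(rt)
    return 1.0 / (1.0 + np.exp(-(reliability_weighted_evidence + beta)))

# Higher alpha -> evidence is treated as more reliable, steepening the
# mapping from evidence to confidence; beta shifts confidence up or down
# independently of the stimulus.
print(approx_confidence(evidence=1.2, rt=0.8, alpha=1.5, beta=0.0))
print(approx_confidence(evidence=1.2, rt=0.8, alpha=1.5, beta=-1.0))  # overall underconfidence
```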