Abstract. Most work on query expansion (QE) has adopted the assumption that terms in a document are independent, and the multinomial distribution is widely used to model feedback documents in many QE models. We argue that in QE methods, the relevance model (RM) that generates the feedback documents should be modeled with a more suitable distribution, in order to naturally capture term associations in the feedback documents. Recently, the Document Boltzmann Machine (DBM) was proposed for document modeling in information retrieval; it relaxes the independence assumption and thus captures term dependencies naturally. It has been shown that the DBM can be seen as a generalization of the traditional unigram language model and achieves better ad hoc retrieval performance. In this paper, we replace the multinomial distribution in the traditional unigram RM method with the DBM, while leaving the main QE framework unchanged to keep the model uncomplicated. Thus, the relevance model is estimated by a DBM trained on the feedback documents, called the relevance DBM (rDBM). The expanded query is generated from the learnt rDBM, and the final expanded query likelihood is given by the parameter values of the rDBM. One difficulty in learning the rDBM is data sparseness, which can lead to overfitted rDBMs and harm retrieval performance. To address this problem, we adopt the Confident Information First (CIF) principle for model selection to reduce the complexity of the rDBM, which makes our proposed query expansion method more efficient and practical. Experiments on several standard TREC collections demonstrate the effectiveness of our QE method with the DBM and the model selection method.