In this paper several methods are proposed for reducing the size of a trigram language model (LM), which is often the biggest data structure in a continuous speech recognizer, without affecting its performance. The common factor shared by the different approaches is to select only a subset of the available trigrams, trying to identify those trigrams that contribute most to the performance of the full trigram LM. The proposed selection criteria apply to trigram contexts, of length either one or two. These criteria rely on information theory concepts, on the back-off probabilities estimated by the LM, or on a measure of the phonetic/linguistic uncertainty relative to a given context. The performance of the reduced trigram LMs is compared both in terms of perplexity and recognition accuracy. Results show that all the considered methods perform better than the naive frequency-shifting method. In fact, a 50% size reduction is obtained on a shift-1 trigram LM at the cost of a 5% increase in word error rate. Moreover, the reduced LMs improve the word error rate of a bigram LM of the same size by around 15%.
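To make the frequency-shifting baseline concrete, the following is a minimal Python sketch, not taken from the paper: it assumes shift-1 means subtracting one from every trigram count, so that singleton trigrams are discarded. The function name `shift1_prune` and the dictionary representation of counts are illustrative assumptions; the paper's actual selection criteria (information-theoretic, back-off based, or uncertainty based) are more elaborate than this baseline.

```python
from collections import Counter

def shift1_prune(trigram_counts: Counter) -> Counter:
    """Naive shift-1 selection (hypothetical sketch, not the paper's code).

    Subtracting 1 from every trigram count drops all singletons, which is
    the simple frequency-shifting baseline the proposed criteria are
    compared against.
    """
    return Counter({tri: c - 1 for tri, c in trigram_counts.items() if c > 1})

# Toy usage: counts of (w1, w2, w3) trigrams.
counts = Counter({
    ("the", "cat", "sat"): 3,
    ("cat", "sat", "on"): 1,   # singleton: removed by shift-1
    ("sat", "on", "the"): 2,
})
print(shift1_prune(counts))
# Counter({('the', 'cat', 'sat'): 2, ('sat', 'on', 'the'): 1})
```

The proposed methods replace this purely count-based filter with criteria that estimate how much each trigram context actually contributes to the full model's performance.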