The work described here focuses on recognition of the Wall Street Journal (WSJ) pilot database [MI, a new CSR database which supports 5K, 20K, and up to 64K-word CSR tasks. The original Lincoln Tied-Mixture HMM CSR was implemented using a time-synchronous beam-pruned search of a static network [14], which does not extend well to this task because the recognition network would be too large. Therefore, the recognizer has been converted to a stack decoder-based search strategy [1, 7, 16]. This decoder has been shown to function effectively on up to 64K-word recognition of continuous speech. Recognition-time adaptation has also been added to the recognizer. This paper describes the acoustic modeling techniques and the implementation of the stack decoder used to obtain these results.
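To make the contrast with the static-network search concrete, the following is a minimal sketch of a best-first stack-decoder loop. It is not the Lincoln implementation: the vocabulary, per-word frame counts, and log-probability scores are toy placeholders standing in for the HMM acoustic and language models, and the real decoder additionally uses a least-upper-bound heuristic so that partial hypotheses covering different amounts of speech are comparable.

```python
# Hedged sketch of a stack (best-first) decoder; vocabulary and scoring
# are hypothetical stand-ins for the acoustic and language models.
import heapq


def extend(words, end_frame, vocab, num_frames):
    """Hypothetical one-word extensions: each word consumes a fixed
    number of frames and contributes a dummy log-probability."""
    for word, (frames, log_prob) in vocab.items():
        new_end = end_frame + frames
        if new_end <= num_frames:
            yield words + (word,), new_end, log_prob


def stack_decode(vocab, num_frames):
    # The "stack" is a priority queue of partial hypotheses,
    # ordered so the best-scoring (least-cost) hypothesis pops first.
    stack = [(0.0, (), 0)]  # (negated log score, word sequence, end frame)
    while stack:
        neg_score, words, end = heapq.heappop(stack)
        if end == num_frames:
            # With nonpositive log-probs this is uniform-cost search,
            # so the first complete hypothesis popped is the best one.
            return words, -neg_score
        for new_words, new_end, lp in extend(words, end, vocab, num_frames):
            heapq.heappush(stack, (neg_score - lp, new_words, new_end))
    return None, float("-inf")


if __name__ == "__main__":
    # Toy vocabulary: word -> (frames consumed, log-probability).
    toy_vocab = {"wall": (10, -2.0), "street": (12, -2.5), "journal": (14, -3.0)}
    print(stack_decode(toy_vocab, 36))
```

The key property illustrated is that the search expands only the hypotheses it pops from the stack, so the vocabulary never has to be compiled into one large static recognition network; this is what allows the same strategy to scale from 5K to 64K-word tasks.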