The increasing computational power of embedded CPUs motivates fixed-point implementations of highly accurate large-vocabulary continuous-speech recognition (LVCSR) algorithms, so that the same performance can be achieved on the device as on the server. We report on methods for the fixed-point implementation of the frame-synchronous beam-search Viterbi decoder, of N-gram language models, and of HMM likelihood computation. The resulting fixed-point recognizer is as accurate as our best floating-point recognizer in several LVCSR experiments on the DARPA Switchboard task and on an AT&T proprietary task, with different types of acoustic front-ends and HMMs. We also present experiments on the DARPA Resource Management task using the 206 MHz StrongARM-1100 CPU, where the fixed-point implementation enables real-time performance: the floating-point recognizer, relying on floating-point software emulation, is 50 times slower at the same accuracy.
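To make the core idea concrete, the sketch below shows how an HMM Gaussian log-likelihood term might be evaluated entirely in integer (Q15 fixed-point) arithmetic, as an embedded CPU without an FPU would require. This is only an illustrative sketch, not the paper's actual implementation; the Q-format choice, the helper names (`to_fixed`, `fixed_mul`, `log_gauss_fixed`), and the toy observation values are all assumptions made for the example.

```python
# Illustrative Q15 fixed-point sketch (NOT the paper's exact method):
# evaluating a diagonal-covariance Gaussian log-likelihood term,
#   log_const - 0.5 * sum_d inv_var[d] * (obs[d] - mean[d])**2,
# using only integer operations.
FRAC_BITS = 15
SCALE = 1 << FRAC_BITS  # Q15: 1.0 is represented as 32768

def to_fixed(x):
    # Round a float to the nearest Q15 integer (hypothetical helper).
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # Multiply two Q15 values; shift the product back down to Q15.
    return (a * b) >> FRAC_BITS

def log_gauss_fixed(obs, mean, inv_var, log_const):
    # All arguments are Q15 integers; the result is a Q15 integer.
    acc = 0
    for o, m, iv in zip(obs, mean, inv_var):
        d = o - m                               # Q15 difference
        acc += fixed_mul(iv, fixed_mul(d, d))   # inv_var * diff^2
    return log_const - (acc >> 1)               # subtract half the quadratic term

# Compare against a floating-point reference on toy values (hypothetical data).
obs, mean, inv_var = [0.25, -0.5], [0.125, -0.25], [1.0, 0.5]
fx = log_gauss_fixed([to_fixed(v) for v in obs],
                     [to_fixed(v) for v in mean],
                     [to_fixed(v) for v in inv_var],
                     to_fixed(0.0))
ref = -0.5 * sum(iv * (o - m) ** 2 for o, m, iv in zip(obs, mean, inv_var))
print(abs(fx / SCALE - ref) < 1e-3)  # fixed-point result tracks the float reference
```

In a real decoder these quadratic terms are accumulated per mixture component and combined with Viterbi path scores, so the fixed-point scaling must also keep the accumulated scores within the integer range; the sketch above only shows the per-frame likelihood arithmetic.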