This paper presents an analysis-by-synthesis approach to acoustic model adaptation. Using artificial speech data to adapt speech recognition systems has the potential to address the problem of data sparseness, to avoid speech recordings in real conditions, and to enable a large number of development cycles for Automatic Speech Recognition (ASR) systems in a shorter time. The proposed adaptation framework uses a unified ASR and synthesis system to produce artificial adaptation speech signals. To confirm the usability of the proposed approach, several experiments were performed in which the artificial speech data was coded and decoded by different speech and waveform coders, and the acoustic model used for synthesis was adapted for each coder. The recognition results show that the proposed method can be used successfully to assess and improve the performance of speech recognition systems, not only for evaluating and adapting to the effects of coded speech, but also for other environmental conditions.