The learning vector quantization (LVQ) algorithm is widely used in image compression because of its intuitively clear learning process and simple implementation. However, LVQ depends strongly on the initialization of the codebook and often converges to locally optimal results. To address these issues, a new two-step LVQ (TsLVQ) algorithm is proposed in this paper. TsLVQ applies a correcting learning stage after standard LVQ, moving the synaptic weight vector away from an incorrectly clustered training vector and towards the correctly clustered training vector. Experimental results show that TsLVQ outperforms kernel-based LVQ (KLVQ) and LVQ in terms of peak signal-to-noise ratio.
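The following is a minimal sketch of the update directions described above, assuming conventional LVQ-style weight updates. The array layout, learning rates, and the criterion for deciding that a training vector is incorrectly clustered are illustrative assumptions, not the paper's exact TsLVQ rules.

```python
import numpy as np

def lvq_update(codebook, x, lr):
    """Standard LVQ step: pull the winning codeword toward training vector x."""
    winner = np.argmin(np.linalg.norm(codebook - x, axis=1))  # nearest codeword
    codebook[winner] += lr * (x - codebook[winner])
    return winner

def correcting_update(codebook, x, correct_idx, lr):
    """Correcting stage (sketch): if the nearest codeword is not the one x
    should be assigned to (hypothetical `correct_idx`), push that wrong winner
    away from x and pull the correct codeword toward x."""
    winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
    if winner != correct_idx:
        codebook[winner] -= lr * (x - codebook[winner])            # move away
        codebook[correct_idx] += lr * (x - codebook[correct_idx])  # move toward
```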