For speech coding, the vocal tract is modeled using Linear Predictive Coding (LPC) coefficients. The LPC coefficients are typically transformed into Line Spectral Frequency (LSF) parameters, which are advantageous for linear interpolation and quantization. If the multidimensional LSF data are quantized directly using Vector Quantization (VQ), high rate-distortion performance can be obtained by fully exploiting the intra-frame correlation. In practice, however, direct VQ cannot be used because of its high computational complexity and memory requirements, so Split VQ (SVQ) is employed, in which the multidimensional vector is split into multiple sub-vectors that are quantized separately. The LSF parameters also exhibit high inter-frame correlation, which is exploited by Predictive SVQ (PSVQ); PSVQ thus provides better rate-distortion performance than SVQ. In this paper, to implement the optimal predictors in PSVQ for voice storage devices, we propose Multi-Frame AR-model based SVQ (MF-AR-SVQ), which considers the inter-frame correlations with multiple previous frames. Compared with conventional PSVQ, the proposed MF-AR-SVQ provides a 1-bit gain in terms of spectral distortion without a significant increase in complexity or memory requirements.
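
As a minimal sketch of the multi-frame AR prediction underlying MF-AR-SVQ (the notation here, including the prediction order $K$ and the per-lag predictor matrices $\mathbf{A}_k$, is illustrative and not taken from the paper), the current LSF vector is predicted from the $K$ previously reconstructed frames, and only the prediction residual is split-vector quantized:
\[
\tilde{\mathbf{x}}_n = \sum_{k=1}^{K} \mathbf{A}_k\,\hat{\mathbf{x}}_{n-k}, \qquad
\mathbf{e}_n = \mathbf{x}_n - \tilde{\mathbf{x}}_n, \qquad
\hat{\mathbf{x}}_n = \tilde{\mathbf{x}}_n + \mathrm{SVQ}(\mathbf{e}_n),
\]
where $\mathbf{x}_n$ is the LSF vector of frame $n$, $\tilde{\mathbf{x}}_n$ its prediction, and $\hat{\mathbf{x}}_n$ its quantized reconstruction. Under this reading, conventional PSVQ corresponds to the single-frame case $K = 1$, while MF-AR-SVQ uses $K > 1$ previous frames.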