Based on our recently proposed frequency-warping scheme using subglottal resonances (SGRs), this paper addresses two well-known limitations of conventional vocal-tract length normalization (VTLN): (1) its sub-optimal nature owing to the lack of frequency-dependent scaling, and (2) its sensitivity to noise. Based on the idea of filter-bank interpolation, a novel approach is proposed to realize the combined effect of VTLN and SGR-based warping (which provides frequency-dependent scaling). Using the Wall Street Journal database and the conventional MFCC front end, SGR warping is shown to be complementary in performance to VTLN. Since SGR warping depends more on the given signal and less on models trained a priori, we argue that it is less sensitive to noise than VTLN. Through experiments on the AURORA-4 database with power-normalized cepstral coefficients as noise-robust front-end features, we show that SGR warping outperforms VTLN under both clean and multi-condition training.
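To make the filter-bank view of frequency warping concrete, the sketch below builds a mel filter bank whose band edges are passed through a warp function before projection onto the FFT grid; a standard piecewise-linear VTLN warp is used as the stand-in warp. This is only a minimal illustration under assumed conventions: the paper's SGR-based warp and its interpolation scheme are not reproduced here, and the function names and parameter values (piecewise_linear_warp, alpha, f_cut) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mel(f):
    """Hz -> mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    """mel -> Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def piecewise_linear_warp(f, alpha, f_cut, f_nyq):
    """Piecewise-linear VTLN-style warp: scale by alpha below f_cut,
    then map linearly so that the Nyquist frequency stays fixed.
    (Conventions for applying alpha vary across implementations.)"""
    f = np.asarray(f, dtype=float)
    return np.where(
        f <= f_cut,
        alpha * f,
        alpha * f_cut + (f_nyq - alpha * f_cut) * (f - f_cut) / (f_nyq - f_cut),
    )

def warped_mel_filterbank(n_filters, n_fft, sr, warp):
    """Triangular mel filter bank whose band edges are passed through
    `warp` (a function Hz -> Hz) before being placed on the FFT grid."""
    f_nyq = sr / 2.0
    # Equally spaced points on the mel scale, converted back to Hz, then warped.
    edges_hz = inv_mel(np.linspace(mel(0.0), mel(f_nyq), n_filters + 2))
    edges_hz = np.clip(warp(edges_hz), 0.0, f_nyq)
    fft_freqs = np.linspace(0.0, f_nyq, n_fft // 2 + 1)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, ctr, hi = edges_hz[i], edges_hz[i + 1], edges_hz[i + 2]
        rise = (fft_freqs - lo) / max(ctr - lo, 1e-8)   # rising edge of triangle
        fall = (hi - fft_freqs) / max(hi - ctr, 1e-8)   # falling edge of triangle
        fbank[i] = np.maximum(0.0, np.minimum(rise, fall))
    return fbank

# Example: a speaker-specific warp factor alpha = 0.92 (hypothetical value)
# applied when constructing the filter bank used for MFCC extraction.
sr, n_fft = 16000, 512
vtln_fb = warped_mel_filterbank(
    26, n_fft, sr,
    warp=lambda f: piecewise_linear_warp(f, alpha=0.92,
                                         f_cut=0.85 * sr / 2, f_nyq=sr / 2),
)
```

A frequency-dependent warp (such as one driven by estimated SGRs) would replace the single global alpha with a warp function whose slope changes across frequency; the filter-bank construction itself stays the same.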