Spatial interpolation of head-related transfer functions (HRTFs) is necessary to reconstruct the spatially continuous HRTF and thereby render virtual sound sources in virtual auditory displays. Based on an artificial neural network with radial basis functions (RBFs), this paper proposes a nonlinear interpolation method for HRTFs. The performance of the proposed interpolation method was validated using a high-resolution HRTF database with a directional resolution of 1°. Computational results indicate that the mean signal distortion ratio is 50.8 dB, 41.7 dB, 36.1 dB, 32.1 dB, 28.8 dB, 20.4 dB, and 16.9 dB for azimuthal intervals of 2°, 4°, 6°, 8°, 10°, 20°, and 30°, respectively. Moreover, the interpolation performance is better for ipsilateral HRTFs than for contralateral HRTFs.
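To illustrate the general idea of RBF-based HRTF interpolation, the following is a minimal sketch, not the paper's implementation: it assumes Gaussian basis functions, a single azimuth input variable, synthetic placeholder HRTF magnitude data, and an arbitrary basis width, none of which are taken from the source.

```python
import numpy as np

# Minimal sketch of radial-basis-function (RBF) interpolation of HRTFs over
# azimuth. Illustrative only: the Gaussian basis, its width, and the random
# "HRTF" magnitudes below are assumptions, not the paper's network or data.

def gaussian_rbf(d, width):
    """Gaussian radial basis function evaluated on angular distances d (degrees)."""
    return np.exp(-(d / width) ** 2)

def angular_distance(a, b):
    """Pairwise shortest angular distance (degrees) between azimuth sets a and b."""
    d = np.abs(a[:, None] - b[None, :]) % 360.0
    return np.minimum(d, 360.0 - d)

def fit_rbf_weights(az_meas, hrtf_meas, width=10.0):
    """Solve Phi @ W = H for the RBF weights (one weight column per frequency bin)."""
    phi = gaussian_rbf(angular_distance(az_meas, az_meas), width)
    # Small diagonal regularization keeps the linear system well conditioned.
    return np.linalg.solve(phi + 1e-8 * np.eye(len(az_meas)), hrtf_meas)

def interpolate_hrtf(az_query, az_meas, weights, width=10.0):
    """Evaluate the RBF network at unmeasured azimuths."""
    phi = gaussian_rbf(angular_distance(az_query, az_meas), width)
    return phi @ weights

# Synthetic example: 64-bin magnitude spectra measured every 10 degrees,
# interpolated back onto a 1-degree azimuthal grid.
az_meas = np.arange(0.0, 360.0, 10.0)
hrtf_meas = np.random.rand(len(az_meas), 64)   # placeholder HRTF magnitudes
weights = fit_rbf_weights(az_meas, hrtf_meas)
az_query = np.arange(0.0, 360.0, 1.0)
hrtf_interp = interpolate_hrtf(az_query, az_meas, weights)
print(hrtf_interp.shape)                       # (360, 64)
```

In this sketch the network weights are obtained by solving a regularized linear system at the measured directions, and interpolation amounts to evaluating the learned RBF expansion at unmeasured azimuths; the paper's actual training procedure, basis choice, and evaluation against the 1° database are described in the full text.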