Human listeners estimate the spatial direction of a sound source from multiple auditory features and prior information about the sound direction. In this work, we describe a model of the directional localization of a broadband, stationary sound source presented in an anechoic environment to a static listener. The model is based on Bayesian inference: it infers the sound direction from a spatial percept built by weighting the sensory evidence with prior beliefs. Under these restrictions, the sensory evidence combines simplistic extraction mechanisms for two binaural features, evaluating interaural disparities in time and intensity, with a more detailed extraction mechanism for monaural spectral-shape features. Our analysis considered two model variants for the monaural features: amplitude spectra and their spectral gradient profiles. In addition, the model accounts for two types of uncertainty: the sensory noise in the spatial cues and the response noise in pointing towards the perceived direction. The proposed model variants were fitted to the individual performance of human listeners localizing noise bursts. We then tested the predictive power of these models on the effects of localizing with other ears and of localizing sounds with rippled spectra. This evaluation, especially for deviations from a flat source spectrum, showed a clear preference for the model variant with spectral gradient cues, also in comparison to previously proposed localization models. Hence, the proposed model provides a tool for the evaluation of head-related transfer functions and a solid basis for future extensions towards modeling dynamic listening conditions.
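The core Bayesian combination described above can be sketched in a minimal, purely illustrative form: a noisy spatial cue yields a Gaussian likelihood over direction, which is weighted against a Gaussian prior, and the resulting percept is read out through a noisy pointing response. The function names and all parameter values below are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def posterior_mean(cue_deg, sigma_sensory, prior_mean_deg=0.0, prior_sigma=60.0):
    """Gaussian likelihood x Gaussian prior -> posterior mean direction (degrees).

    The weight on the sensory evidence grows as sensory noise shrinks,
    pulling the estimate away from the prior mean and towards the cue.
    """
    w = prior_sigma**2 / (prior_sigma**2 + sigma_sensory**2)  # evidence weight
    return w * cue_deg + (1.0 - w) * prior_mean_deg

def respond(cue_deg, sigma_sensory=10.0, sigma_response=5.0, rng=None):
    """Spatial percept from Bayesian inference, then a noisy pointing response."""
    rng = rng if rng is not None else np.random.default_rng(0)
    percept = posterior_mean(cue_deg, sigma_sensory)
    return percept + rng.normal(0.0, sigma_response)  # response (motor) noise
```

Note how the two uncertainty types enter at different stages: `sigma_sensory` shapes the inference itself, while `sigma_response` only perturbs the reported direction.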