This paper presents a novel approach to extracting vocal melodies from accompanied singing recordings. Central to our approach is a model of vocal fundamental frequency (F0) likelihood that integrates acoustic-phonetic knowledge with real-world data. The model combines a timbral fitness score and a loudness measure for each F0 candidate. Timbral fitness is computed over the partial amplitudes of an F0 candidate with respect to a small set of vocal timbre examples; this F0-specific measurement relies on an acoustic-phonetic F0 modification of each timbre example. For the loudness part of the likelihood model, sinusoids are detected, tracked, and pruned to yield loudness values that minimize interference from the accompaniment. A final F0 estimate is determined by combining the likelihood model with a prior model of the F0 sequence. Melody extraction is completed by detecting voiced time positions from the singing-voice loudness variations along the estimated F0 sequence. The numerical parameters of our approach were optimized on three development sets from different sources, and the system was then evaluated on ten test sets separate from these development sets. Controlled experiments show that the timbral fitness score accounts for a 13% difference in overall accuracy.