This paper proposes a novel approach for the automatic segmentation of the singing voice in music signals, based on the difference between the dynamic harmonic content of the singing voice and that of musical instrument signals. The results are compared with those of another approach proposed in the literature, using the same music database. Both techniques achieve an accuracy rate of around 80%, even though a more rigorous performance measure is applied to our approach only. As an advantage, the new procedure has lower computational complexity. In addition, we discuss further results obtained by extending the tests to the whole database (maintaining the same performance level) and by discriminating among the error types (boundaries shifted in time, and insertion and deletion of singing segments). The analysis of these errors suggests alternative ways of reducing them, such as adopting a confidence level based on a minimum harmonic content of the input signals. When only signals with a confidence level equal to one are considered, the performance improves to almost 87%.
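As a minimal illustrative sketch (not the paper's implementation), one way such a confidence level could be computed is to estimate the harmonic-to-total energy ratio of an input signal and assign confidence one only when the ratio exceeds a minimum threshold; here librosa's harmonic/percussive separation is assumed as a proxy for harmonic content, and the helper name `harmonic_confidence` and the threshold value are hypothetical.

```python
# Illustrative sketch only: a binary confidence level derived from a
# minimum harmonic-content requirement. The use of librosa's HPSS and
# the 0.6 threshold are assumptions, not taken from the paper.
import numpy as np
import librosa


def harmonic_confidence(path, min_harmonic_ratio=0.6):
    """Return 1 if the signal's harmonic energy ratio meets the minimum, else 0."""
    y, sr = librosa.load(path, sr=None, mono=True)
    y_harmonic, _ = librosa.effects.hpss(y)      # harmonic/percussive split
    total_energy = np.sum(y ** 2) + 1e-12        # guard against silent input
    harmonic_ratio = np.sum(y_harmonic ** 2) / total_energy
    return 1 if harmonic_ratio >= min_harmonic_ratio else 0


if __name__ == "__main__":
    # Segmentation results would then be reported only for signals whose
    # confidence level equals one, as discussed above.
    print(harmonic_confidence("example_track.wav"))
```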