The goal of music highlight extraction, or thumbnailing, is to extract a short consecutive segment of a piece of music that is representative of the whole piece. In previous work, we introduced an attention-based convolutional recurrent neural network that uses music emotion classification as a surrogate task for music highlight extraction, assuming that the most emotional part of a song usually corresponds to its highlight. This paper extends that work in two aspects. First, on the methodology side, we experiment with a new architecture that does not need any recurrent layers, making training faster. Moreover, we compare a late-fusion variant and an early-fusion variant to study which one better exploits the attention mechanism. Second, we conduct and report an extensive set of experiments comparing the proposed attention-based methods against a heuristic energy-based method, a structural repetition-based method, and three other simple feature-based methods. Owing to the lack of public-domain labeled data for highlight extraction, following our previous work we use the 100-song RWC-Pop data set and evaluate how well the detected highlights overlap with any of the chorus sections of the songs. The experiments demonstrate the superior effectiveness of our methods over the competing ones. For reproducibility, we share the code and the pre-trained model at https://github.com/remyhuang/pop-music-highlighter/.
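As a rough illustration of how chunk-level attention scores from an emotion classifier can be turned into a highlight, the following minimal sketch selects the contiguous window of chunks with the largest total attention mass. This is not the exact procedure or the hyperparameters of our model; the function name `select_highlight` and the 3-second chunk / 30-second highlight defaults are assumptions made only for illustration.

```python
import numpy as np

def select_highlight(attention, chunk_sec=3.0, highlight_sec=30.0):
    """Return the (start, end) time in seconds of the highlight.

    attention: 1-D array with one non-negative score per fixed-length
        audio chunk (e.g., softmax-normalized attention weights from an
        emotion classifier).
    chunk_sec, highlight_sec: illustrative defaults, not the settings
        used in the paper.
    """
    win = max(1, int(round(highlight_sec / chunk_sec)))  # chunks per window
    if len(attention) <= win:
        return 0.0, len(attention) * chunk_sec
    # Sliding-window sums via a cumulative sum: O(n) rather than O(n * win).
    csum = np.concatenate(([0.0], np.cumsum(attention)))
    window_mass = csum[win:] - csum[:-win]  # mass of each contiguous window
    start = int(np.argmax(window_mass))     # window with the largest mass
    return start * chunk_sec, (start + win) * chunk_sec

# Example: 80 chunks (~4 minutes of audio) with random attention scores.
scores = np.random.dirichlet(np.ones(80))
print(select_highlight(scores))  # e.g., (93.0, 123.0)
```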