This paper is an extended abstract which provides a brief preliminary overview of the 2005 Music Information Retrieval Evaluation eXchange (MIREX 2005). The MIREX organizational framework and infrastructure are outlined. Summary data concerning the 10 evaluation contests are provided. Key issues affecting future MIR evaluations are identified and discussed. The paper concludes with a listing of target items to be undertaken before MIREX 2006 to ensure the ongoing success of the MIREX framework.
Welcome, friends and colleagues, to the 2nd Annual International Symposium on Music Information Retrieval, ISMIR 2001. Following on the heels of last year's groundbreaking inaugural conference, we're convening with colleagues this year at the beautiful campus of Indiana University, Bloomington. We hope the information exchange fostered by this conference will facilitate innovation and enhance collaboration in this dynamic area of research.

This year's program is rich in content and variety. We are honored to have David Cope present this year's keynote address. The presentations by our four invited speakers, Roger Dannenberg, Jef Raskin, Youngmoo Kim, and Adam Lindsay, provide added depth and breadth to an already dynamic and diverse program. This document includes the texts of the accepted papers along with the extended abstracts of the invited talks and poster presentations. All are also available on the ISMIR 2001 Web site at http://ismir2001.indiana.edu/

As with last year, we were very encouraged by the number and quality of submissions. Response to our Call for Papers was remarkable. Selecting the twenty papers for presentation (out of 40 submissions) and the eighteen posters for exhibition was no easy task. I'd like to personally thank all those who gave of their time to help review submissions. Unending appreciation and thanks must be extended to the Program Committee: David Bainbridge (Program Chair), Gerry Bernbom, Donald Byrd, Tim Crawford, Jon Dunn, and Michael Fingerhut. Additionally, I'd like to thank Dr. Stephen Griffin of the National Science Foundation for helping us secure the foundational funding that made this symposium possible. Dr. Radha Nandkumar of the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, must also be thanked for providing financial support to help us augment our student stipend program.

Finally, several departments and individuals at our host institution, Indiana University Bloomington, deserve thanks. A great deal of the planning that went into this conference was done by Diane Jung, Charles Rondot, David Taylor, and Les Teach of the Communications and Planning Office under the Office of the Vice President for Information Technology and CIO, and by Tawana Green and the staff of the IU Conference Bureau. Their assistance is much appreciated. In addition, the Indiana University School of Music generously supplied an extraordinary instrument, a fortepiano, for the mini-recital at the Mathers Museum. Sincerely,

ABSTRACT
We present a measure of the similarity of the long-term structure of musical pieces. The system deals with raw polyphonic data. Through unsupervised learning, we generate an abstract representation of music, the "texture score". This "texture score" can be matched to other similar scores using a generalized edit distance, in order to assess structural similarity. We notably apply this algorithm to the retrieval of different interpretations of the same song within a music database.

MOTIVATION
Motivation for this system is our be...
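The abstract above does not specify how the generalized edit distance over "texture scores" is computed. As a rough illustration only, a minimal sketch of a weighted edit distance between symbolic sequences might look like the following; the sequence encoding and the cost values are assumptions for illustration, not the authors' method.

```python
# Minimal sketch of a generalized (weighted) edit distance between two
# symbolic sequences, e.g. "texture scores" encoded as lists of state labels.
# The substitution/insertion/deletion costs are illustrative assumptions.

def edit_distance(a, b, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
    """Dynamic-programming edit distance between sequences a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = minimal cost of transforming a[:i] into b[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * del_cost
    for j in range(1, n + 1):
        dp[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = dp[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else sub_cost)
            dp[i][j] = min(match,
                           dp[i - 1][j] + del_cost,   # delete a[i-1]
                           dp[i][j - 1] + ins_cost)   # insert b[j-1]
    return dp[m][n]

# Example: compare two hypothetical texture-state sequences;
# a lower distance would indicate greater structural similarity.
song_a = ["A", "A", "B", "B", "C", "A"]
song_b = ["A", "B", "B", "C", "C", "A"]
print(edit_distance(song_a, song_b))
```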
Mood is an emerging metadata type and access point in music digital libraries (MDL) and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features, including linguistic and text stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, the examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio alone. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state-of-the-art on lyric sentiment analysis and automatic music mood classification and will help make mood a practical access point in music digital libraries.
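The two fusion methods are not detailed in this abstract. One common approach in this setting is late fusion, where separate classifiers are trained on lyric and audio features and their predicted class probabilities are combined. The sketch below illustrates that general idea only; the synthetic feature matrices, the classifier choice, and the fusion weight are all assumptions, not the paper's setup.

```python
# Illustrative late-fusion sketch: train separate classifiers on lyric and
# audio features, then average their predicted class probabilities.
# All data here is synthetic; real systems would extract lyric/audio features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test = 200, 50
X_lyrics = rng.normal(size=(n_train, 30))   # stand-in for lyric text features
X_audio = rng.normal(size=(n_train, 20))    # stand-in for audio features
y = rng.integers(0, 2, size=n_train)        # binary mood labels for the sketch

clf_lyrics = LogisticRegression(max_iter=1000).fit(X_lyrics, y)
clf_audio = LogisticRegression(max_iter=1000).fit(X_audio, y)

Xl_test = rng.normal(size=(n_test, 30))
Xa_test = rng.normal(size=(n_test, 20))

w = 0.5  # fusion weight; in practice tuned on held-out validation data
proba = (w * clf_lyrics.predict_proba(Xl_test)
         + (1 - w) * clf_audio.predict_proba(Xa_test))
pred = proba.argmax(axis=1)  # fused mood predictions
```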