As an important form of multimedia, music fills almost everyone's life. Automatically analyzing music is a significant step toward satisfying people's needs for effortless music retrieval and recommendation. Among such analysis tasks, downbeat tracking has been a fundamental and ongoing problem in the Music Information Retrieval (MIR) field. Despite significant research efforts, downbeat tracking remains a challenge. Previous research either focuses on feature engineering (extracting certain features via signal processing, which yields semi-automatic solutions) or suffers from limitations: it can only model music audio recordings within limited time signatures and tempo ranges. Recently, deep learning has surpassed traditional machine learning methods and become the primary approach to feature learning; combinations of traditional and deep learning methods have also achieved better performance. In this paper, we begin with a background introduction to the downbeat tracking problem. Then, we give detailed discussions of the following topics: system architecture, feature extraction, deep neural network algorithms, datasets, and evaluation strategy. In addition, we look at results from the annual benchmark evaluation, the Music Information Retrieval Evaluation eXchange (MIREX), as well as developments in software implementations. Although much has been achieved in automatic downbeat tracking, some problems still remain. We point out these problems and conclude with possible directions and challenges for future research.
Heart sound segmentation (HSS) aims to detect the four stages (the first heart sound, systole, the second heart sound, and diastole) of a heart cycle in a phonocardiogram (PCG), which is an essential step in automatic auscultation analysis. Traditional HSS methods must manually extract features before addressing HSS tasks. These handcrafted features depend heavily on the extraction algorithms, which often perform poorly under different operating environments. In addition, the high dimensionality and frequency characteristics of audio also challenge traditional methods in effectively addressing HSS tasks. This paper presents a novel end-to-end method based on convolutional long short-term memory (CLSTM), which directly uses audio recordings as input to address HSS tasks. In particular, the convolutional layers are designed to extract meaningful features and perform downsampling, and the LSTM layers are developed to conduct the sequence recognition. Both components collectively improve the robustness and adaptability in processing HSS tasks. Furthermore, the proposed CLSTM algorithm is easily extended to other complex heart sound annotation tasks, as it does not need to extract the characteristics of the corresponding tasks in advance. In addition, the proposed algorithm can also be regarded as a powerful feature extraction tool that can be integrated into existing models for HSS. Experimental results on real-world PCG datasets, through comparisons with peer competitors, demonstrate the outstanding performance of the proposed algorithm.
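To make the described pipeline concrete, the following is a minimal PyTorch sketch of the CLSTM idea: convolutional layers extract features from the raw PCG waveform and downsample it in time, and LSTM layers label each resulting frame with one of the four heart-cycle stages. The layer sizes, kernel widths, sampling rate, and class ordering here are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a CLSTM for heart sound segmentation.
# All hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        # Conv stack: feature extraction plus temporal downsampling (assumed sizes).
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Bidirectional LSTM performs the sequence recognition over frames.
        self.lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        # Per-frame classifier over the four stages (S1, systole, S2, diastole).
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, samples) raw audio; add a channel axis for Conv1d.
        h = self.conv(x.unsqueeze(1))        # (batch, 32, frames)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, frames, 128)
        return self.head(h)                  # (batch, frames, n_classes)

# Example: one 8-second PCG recording at a hypothetical 2 kHz sampling rate.
model = CLSTM()
logits = model(torch.randn(1, 16000))
stages = logits.argmax(dim=-1)  # per-frame stage predictions
```

In this sketch the conv stack also serves as the reusable feature extractor mentioned in the abstract: its output frames could be fed into other sequence models in place of handcrafted features.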