A new method of extracting acoustic features based on an auditory spike code is proposed. An auditory spike code represents the acoustic activities created by the signal, similar to the sound encoding of the human auditory system. In the proposed method, an auditory spike code of the signal is computed using a 64-band Gammatone filterbank as the kernel functions. Then, for each spectral band, the sum and the non-zero count of the auditory spike code are determined, yielding features that correspond to the population and occurrence rate of the acoustic activities in each band. In addition, the distribution of the acoustic activities along the time axis is analysed via the histogram of time intervals between adjacent acoustic activities, and features expressing the temporal properties of the signal are extracted. The reconstruction accuracy of the auditory spike code is also measured as a feature. Unlike most conventional features, which are obtained by complex statistical modelling or learning, the features produced by the proposed method directly reflect specific acoustic characteristics contained in the signal. These features are applied to music genre classification, and it is confirmed that they provide performance comparable to that of state-of-the-art features.
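The per-band aggregation and interval-histogram steps described above can be sketched as follows. This is a minimal illustration that assumes the spike code has already been computed (e.g. by matching pursuit with Gammatone kernels) and is given as a bands-by-time array whose zero entries mean "no activity"; the function name and the histogram bin settings are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def spike_code_features(spike_code, n_isi_bins=8, max_isi=64):
    """Aggregate an auditory spike code (bands x time, zero = no activity)
    into per-band population/rate features plus an inter-spike-interval
    histogram. Bin count and range are illustrative assumptions."""
    n_bands, n_frames = spike_code.shape
    # Per-band sum of spike magnitudes: "population" of acoustic activity.
    band_sum = np.abs(spike_code).sum(axis=1)
    # Per-band count of non-zero entries, normalised: occurrence rate.
    band_rate = np.count_nonzero(spike_code, axis=1) / n_frames
    # Time intervals between adjacent activities, pooled over all bands.
    isis = []
    for band in spike_code:
        t = np.flatnonzero(band)
        if t.size > 1:
            isis.extend(np.diff(t))
    isi_hist, _ = np.histogram(isis, bins=n_isi_bins, range=(1, max_isi))
    isi_hist = isi_hist / max(len(isis), 1)  # normalise to a distribution
    return np.concatenate([band_sum, band_rate, isi_hist])
```

For a 64-band code this yields 64 + 64 + `n_isi_bins` values, which would then be concatenated with the reconstruction-accuracy feature before classification.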
SUMMARY: A method for encoding detection and bit rate classification of AMR-coded speech is proposed. For each texture frame, 184 features consisting of short-term and long-term temporal statistics of speech parameters are extracted, which effectively measure the amount of distortion introduced by AMR coding. A deep neural network then classifies the bit rate of the speech by analyzing the extracted features. It is confirmed that the proposed features outperform the conventional spectral features designed for bit rate classification of coded audio. Key words: bit rate, speech codec, AMR, deep neural network, feature vector
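The short-term/long-term temporal statistics mentioned above can be illustrated for a single speech-parameter track. This is a hedged stand-in, not the paper's 184-dimensional feature set: the function name, window length, and the particular statistics are assumptions chosen only to show the two time scales.

```python
import numpy as np

def temporal_stats(param_track, win=10):
    """Short- and long-term temporal statistics of one per-frame speech
    parameter (illustrative; the actual feature set uses many parameters
    and statistics to reach 184 dimensions per texture frame)."""
    x = np.asarray(param_track, dtype=float)
    # Long-term statistics over the whole texture frame.
    long_term = [x.mean(), x.std(), np.abs(np.diff(x)).mean()]
    # Short-term statistics: stats of windowed means/stds, capturing
    # how the parameter fluctuates within the texture frame.
    n = len(x) // win
    wins = x[: n * win].reshape(n, win)
    short_term = [wins.mean(axis=1).std(), wins.std(axis=1).mean()]
    return np.array(long_term + short_term)
```

Concatenating such statistics over all tracked speech parameters would produce the feature vector that the deep neural network classifies.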
In this paper, we propose a new method for on-line genre classification using a spectrogram and a deep neural network. For on-line processing, the proposed method takes an audio signal over a period of 1 second as input and classifies it into one of three genres: speech, music, and effect. To maintain generality of processing, it uses the spectrogram as the feature vector instead of the MFCCs that have been widely used for audio analysis. We measure genre classification performance on real TV audio signals and confirm that the proposed method outperforms the conventional method for all genres. In particular, it reduces the rate of misclassification between music and effect, which often occurs with the conventional method.
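The spectrogram feature for a 1-second excerpt can be sketched as below. The 16 kHz sampling rate, FFT size, and hop length are illustrative assumptions (the abstract does not specify them), and the log compression is a common choice rather than a confirmed detail of the method.

```python
import numpy as np

def spectrogram_feature(signal, n_fft=512, hop=256):
    """Log-magnitude spectrogram of a short excerpt, to be fed to a
    genre classifier (speech / music / effect). Frame sizes are
    illustrative assumptions, not the paper's exact configuration."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        mag = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(mag))  # log compression, safe at zero
    return np.stack(frames)  # shape: (n_frames, n_fft // 2 + 1)
```

A deep neural network would then map this time-frequency representation to one of the three genre labels every second.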