The paper investigates retraining options and the performance of pre-trained Convolutional Neural Networks (CNNs) for sound classification. CNNs were initially designed for image classification and recognition and were later extended to sound classification. Transfer learning is a promising paradigm in which already-trained networks are retrained on different datasets. We selected three ‘Image’-trained and two ‘Sound’-trained CNNs, namely GoogLeNet, SqueezeNet, ShuffleNet, VGGish, and YAMNet, and applied transfer learning to them. We explored the influence of key retraining parameters, including the optimizer, the mini-batch size, the learning rate, and the number of epochs, on classification accuracy and on the processing time needed both for sound preprocessing (the preparation of the scalograms and spectrograms) and for CNN training. The UrbanSound8K, ESC-10, and Air Compressor open sound datasets were employed. Using a two-fold criterion based on classification accuracy and the time needed, we selected the ‘champion’ transfer-learning parameter combinations, discussed the consistency of the classification results, and explored possible benefits from fusing the classification estimates. The Sound-trained CNNs achieved the better classification accuracy, reaching an average of 96.4% for UrbanSound8K, 91.25% for ESC-10, and 100% for the Air Compressor dataset.
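The two-fold selection described above can be sketched as a simple grid search: enumerate the retraining-parameter combinations, then pick the one with the highest accuracy, breaking ties by the shortest processing time. This is a minimal sketch; the parameter values and the measurements below are illustrative assumptions, not the paper's actual grid or results.

```python
from itertools import product

# Hypothetical retraining-parameter grid (the parameter names follow the
# abstract; the concrete values here are illustrative only).
grid = {
    "optimizer": ["sgdm", "adam", "rmsprop"],
    "mini_batch_size": [16, 32, 64],
    "learning_rate": [1e-3, 1e-4],
    "epochs": [4, 8],
}

def champion(results):
    """Two-fold criterion: highest accuracy first, shortest time second.

    `results` maps each parameter combination to an (accuracy, seconds) pair.
    """
    return max(results, key=lambda combo: (results[combo][0], -results[combo][1]))

combos = list(product(*grid.values()))

# Toy measurements for two combinations (fabricated for illustration):
# equal accuracy, so the faster one wins the tie-break.
results = {combos[0]: (0.93, 420.0), combos[1]: (0.93, 310.0)}
best = champion(results)
```

In practice each `(accuracy, seconds)` pair would come from actually retraining the CNN with that combination; the tie-break on time matters because combinations often reach near-identical accuracy at very different training costs.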
Machine learning algorithms for sound classification can be supported by multiple temporal, spectral, and perceptual features extracted from the sound signal. The number of features affects not only the classification accuracy but also the computational resources required, so it has to be selected carefully. In this work, we propose a methodology for feature selection based on principal component analysis. The case study is the classification of classroom sounds during face-to-face module delivery, for which six sound types have been defined. The proposed method is applied to a set of 143 sound features to produce a feature ranking. The ranking results are compared with those provided by Relief-F. The selected features are then used by five classification algorithms: Linear Discriminant Analysis (LDA), Quadratic Support Vector Machine (QSVM), k-Nearest Neighbors, Boosted Trees, and Random Forest. The algorithms are executed with an increasing number of features, from 1 to 143, for both feature rankings, creating 1430 models. The performance of the classification algorithms increases rapidly with the number of features, with LDA, QSVM, and Boosted Trees outperforming the other methods and surpassing 90% accuracy with 25 features.
The objective of this work is to identify unobtrusive methodologies that allow monitoring and understanding of the educational environment during face-to-face activities through the capture and processing of sound and video signals. It is a survey of applications and techniques that exploit these two signals (sound and video) as retrieved in classrooms, offices, and other spaces. We categorize such applications based on the high-level characteristics extracted from the analysis of the low-level features of the sound and video signals. Through this overview, we attempt to reach a degree of understanding of human behavior in a smart classroom, on the part of both the students and the teacher. Additionally, we highlight open research points for further investigation.