2020
DOI: 10.3390/s20164368
Efficient Noisy Sound-Event Mixture Classification Using Adaptive-Sparse Complex-Valued Matrix Factorization and OvsO SVM

Abstract: This paper proposes a solution for event classification from a sole noisy mixture that consists of two major steps: sound-event separation and sound-event classification. The traditional complex nonnegative matrix factorization (CMF) is extended by incorporating an optimal adaptive L1 sparsity to decompose a noisy single-channel mixture. The proposed adaptive L1 sparsity CMF algorithm encodes the spectral patterns and estimates the phase of the original signals in the time-frequency representation. Their fea…

Cited by 12 publications (5 citation statements)
References 40 publications
“…Finally, the support vector regression is an extension of the support vector machine for solving regression problems. The objective function of SVR is to minimize the coefficients by using the l 2 -norm of the coefficient vector [ 29 , 30 ] instead of the squared error, as expressed in Equation (12). The constraint called the maximum error ( ) is represented by the absolute error in Equation (13).…”
Section: Methodsmentioning
confidence: 99%
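The statement above describes epsilon-SVR: minimize the l2-norm of the coefficient vector subject to each absolute error staying within a maximum-error tube. A minimal sketch with scikit-learn's `SVR` is below; the sine target and the parameter values are illustrative assumptions, not values from the cited work.

```python
import numpy as np
from sklearn.svm import SVR

# Epsilon-SVR: minimize the l2-norm of the coefficients subject to
# each absolute error |y_i - f(x_i)| lying within the tube of width epsilon
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X).ravel()

# epsilon sets the maximum tolerated error; C trades it off against flatness
reg = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
print(reg.predict([[0.5]]))  # should approximate sin(1.5)
```

Samples falling inside the epsilon tube incur no loss, which is what distinguishes this objective from ordinary squared-error regression.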
“…Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems inspired by the way the biological neural networks of the human brain process information [ 33 , 34 , 35 , 36 , 37 ]. An ANN consists of an input layer, followed by hidden layers, and lastly an output layer [ 38 ].…”
Section: Methodsmentioning
confidence: 99%
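The input/hidden/output layer structure described above can be sketched with scikit-learn's `MLPClassifier`; the synthetic dataset and layer sizes here are illustrative assumptions, not the cited paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A minimal feed-forward ANN: an input layer (20 features), two hidden
# layers of 32 and 16 units, and an output layer for the class labels
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                   random_state=0).fit(X, y)
print(nn.score(X, y))  # training accuracy
```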
“…This effect can be used efficiently for pattern classification purposes. Examples of sound-event classification can be found in [21], where different sources are separated and classified. For the present problem of acoustic localization, a supervised approach is considered, where the network is trained on observations from an acoustic model and the corresponding coordinates in a predefined search space.…”
Section: Introductionmentioning
confidence: 99%