2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8461293
Orthogonality-Regularized Masked NMF for Learning on Weakly Labeled Audio Data

Cited by 3 publications (3 citation statements)
References 10 publications
“…Some approaches propagate the bag-level label to all instances and train against these directly [51,56], which can introduce instance-level label noise. Other approaches are based on source separation, and obtain dynamic labels by post-processing the separated sources (e.g., by computing the frame-wise energy of each separated source) [57,58].…”
Section: Sound Event Detection Using Weakly Labeled Data
confidence: 99%
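The energy-based post-processing mentioned in the statement above can be made concrete with a minimal sketch. All names, array shapes, and the threshold below are illustrative assumptions for a generic pipeline, not details taken from [57,58]:

```python
import numpy as np

def frame_energy_labels(separated_sources, threshold_db=-30.0):
    """Derive frame-wise (dynamic) labels from separated sources.

    separated_sources: array of shape (n_classes, n_frames, n_bins),
        magnitude spectrogram of each separated source.
    Returns a binary matrix of shape (n_classes, n_frames): 1 where a
    source's frame energy is within threshold_db of that source's peak.
    """
    # Frame-wise energy of each separated source.
    energy = np.sum(separated_sources ** 2, axis=-1)  # (n_classes, n_frames)
    # Energy in dB relative to each source's maximum frame energy.
    peak = energy.max(axis=1, keepdims=True) + 1e-12
    energy_db = 10.0 * np.log10(energy / peak + 1e-12)
    # A frame is labeled active when its relative energy exceeds the threshold.
    return (energy_db > threshold_db).astype(int)
```

The resulting per-frame activity matrix serves as the "dynamic label" for instance-level training, replacing the single bag-level tag.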
“…For the AED model we use a model from our previous work, where we adapted a standard NMF approach to learning on weakly labeled data [21].…”
Section: Orthogonality-Regularized NMF
confidence: 99%
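As a rough illustration of the idea behind orthogonality-regularized NMF, here is a generic sketch of multiplicative updates with a soft orthogonality penalty on the dictionary. This is a common textbook-style formulation for illustration only; the exact objective, masking scheme, and update rules of [21] are not reproduced here:

```python
import numpy as np

def orthogonality_regularized_nmf(V, n_components, lam=0.1, n_iter=200, eps=1e-9):
    """Generic NMF, V ≈ W @ H, with a soft orthogonality penalty
    lam * ||W^T W - I||_F^2 on the dictionary W (constant gradient
    factors absorbed into lam).

    Multiplicative updates are obtained by splitting the gradient into
    positive and negative parts, so W and H stay nonnegative.
    """
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], n_components)) + eps
    H = rng.random((n_components, V.shape[1])) + eps
    for _ in range(n_iter):
        # Standard multiplicative update for the activations H.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # Dictionary update: the penalty contributes 2*lam*W to the
        # numerator and 2*lam*W (W^T W) to the denominator.
        numer = V @ H.T + 2 * lam * W
        denom = W @ (H @ H.T) + 2 * lam * W @ (W.T @ W) + eps
        W *= numer / denom
    return W, H
```

The penalty discourages correlated dictionary atoms, which in a weakly labeled setting helps keep per-class dictionaries from absorbing spectral patterns that belong to other classes.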
“…In [9], class activity penalties and structured dropout are used for score-informed source separation by applying constraints to the latent units of an autoencoder (AE). In [10], an NMF method is proposed that is trained on weakly labeled data. Another work that utilizes class information is [11] where a conditional variational autoencoder (VAE) is trained as a universal generative model to represent known source classes.…”
Section: Introduction
confidence: 99%