2020
DOI: 10.1371/journal.pone.0243758

Inter-floor noise classification using convolutional neural network

Abstract: In apartment buildings, noise transmitted between floors can disturb a pleasant living environment and cause disputes between neighbors. As a means of resolving such disputes, noise is recorded in a household for 24 hours to verify whether the inter-floor noise exceeds legal standards. If it does, the recording is reviewed by listening to check whether the noise comes from neighboring households. Done manually, this process is time-consuming and costly, and …
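The paper's approach, per the title, is a convolutional neural network that classifies recorded inter-floor noise. As an illustration only, and not the authors' architecture, the following is a minimal PyTorch sketch of a CNN classifying log-mel spectrogram clips; the input shape, layer sizes, and number of noise classes are assumptions.

```python
# Minimal sketch of a CNN classifier for log-mel spectrograms of noise clips.
# NOTE: layer sizes, input shape (1 x 64 mel bands x 128 frames) and the
# number of noise classes are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class NoiseCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames) log-mel spectrogram
        h = self.features(x).flatten(1)
        return self.classifier(h)              # raw class logits

if __name__ == "__main__":
    model = NoiseCNN(n_classes=5)
    dummy = torch.randn(8, 1, 64, 128)         # batch of 8 fake spectrograms
    print(model(dummy).shape)                  # torch.Size([8, 5])
```

The dummy forward pass is included only to show the expected tensor shapes for a batch of spectrogram inputs.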

Cited by 8 publications (11 citation statements)
References 24 publications
“…It is not clear why data augmentation decreases the accuracy of the model. In fact, results on UrbanSound 8K dataset are lower when compared to techniques such as [28] that make use of data augmentation. Our intuition is that AUCO ResNet is very sensitive to the input audio quality.…”
Section: Results
confidence: 96%
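For context on what waveform-level data augmentation typically looks like in environmental sound classification, here is a generic numpy-only sketch (time shift, additive noise at a target SNR, random gain); the specific transforms and parameters used in [28] are not given here, so these functions are illustrative assumptions.

```python
# Generic waveform-level augmentations often used for environmental sound
# classification. The specific transforms/parameters in [28] may differ;
# these are illustrative examples only.
import numpy as np

def time_shift(y: np.ndarray, max_frac: float = 0.1) -> np.ndarray:
    """Circularly shift the waveform by up to max_frac of its length."""
    limit = int(len(y) * max_frac)
    shift = np.random.randint(-limit, limit + 1)
    return np.roll(y, shift)

def add_noise(y: np.ndarray, snr_db: float = 20.0) -> np.ndarray:
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(y ** 2) + 1e-12
    noise_power = signal_power / (10 ** (snr_db / 10))
    return y + np.random.normal(0.0, np.sqrt(noise_power), size=y.shape)

def random_gain(y: np.ndarray, low_db: float = -6.0, high_db: float = 6.0) -> np.ndarray:
    """Scale amplitude by a random gain drawn uniformly in dB."""
    gain_db = np.random.uniform(low_db, high_db)
    return y * (10 ** (gain_db / 20))

if __name__ == "__main__":
    y = np.random.randn(16000).astype(np.float32)    # 1 s of fake audio at 16 kHz
    augmented = random_gain(add_noise(time_shift(y)))
    print(augmented.shape)                           # (16000,)
```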
“…NO means not present and YES means the presence of the pre-training or data augmentation technique.

Model                      | Accuracy | Precision | Recall | F1 score | AUC ROC | Pre Trained | Data Augmentation
AUCO ResNet                | 0.7783   | 0.7851    | 0.7783 | 0.7709   | 0.9677  | NO          | NO
Chong et al [27]           | 0.751    | NA        | NA     | NA       | NA      | NO          | NO
Salamon et al [25]         | 0.75     | NA        | NA     | NA       | NA      | NO          | NO
Giannakopoulos et al [29]  | 0.731    | NA        | NA     | NA       | NA      | NO          | NO
Salamon et al [23]         | 0.73     | NA        | NA     | NA       | NA      | NO          | NO
Piczac et al [24]          | 0.73     | NA        | NA     | NA       | NA      | NO          | NO
Jin et al [26]             | 0.705    | NA        | NA     | NA       | NA      | NO          | NO
Salamon et al [22]         | 0.70     | NA        | NA     | NA       | NA      | NO          | NO
Shin et al [28]            | 0.8514   | NA        | NA     | NA       | NA      | YES         | YES
Shin et al [28]            | …”

Section: Results
confidence: 99%
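The Accuracy, Precision, Recall, F1 score, and AUC ROC columns above are standard multi-class classification metrics. As a reference for how such values are typically computed (the averaging mode and the multi-class ROC-AUC strategy below are assumptions; the cited works may use different conventions), here is a brief scikit-learn sketch on synthetic predictions:

```python
# Sketch of computing the metrics reported in the comparison table.
# 'weighted' averaging and the 'ovr' multi-class ROC-AUC strategy are
# assumptions; the cited papers may use different conventions.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10                               # e.g. UrbanSound8K has 10 classes
y_true = rng.integers(0, n_classes, size=n_samples)          # synthetic ground-truth labels
y_prob = rng.dirichlet(np.ones(n_classes), size=n_samples)   # synthetic class probabilities
y_pred = y_prob.argmax(axis=1)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted", zero_division=0))
print("Recall   :", recall_score(y_true, y_pred, average="weighted", zero_division=0))
print("F1 score :", f1_score(y_true, y_pred, average="weighted", zero_division=0))
print("AUC ROC  :", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```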