2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533407

Revisiting the Onsets and Frames Model with Additive Attention


Cited by 15 publications (11 citation statements) | References 6 publications
Citation types: 0 supporting, 11 mentioning, 0 contrasting
“…As a major advancement, U-net models [35] were shown to improve the performance of AMT [3], [20]–[23], [36] and other MIR tasks [37], [38]. More recently, the inclusion of self-attention components into U-nets [3], [21] and other models [39] was successfully applied to MPE. Most of our architectures rely on the U-net paradigm, enhanced with self-attention components as in [3] and other extensions.…”
Section: A. Previous Models (mentioning)
confidence: 99%
“…Multi-task strategies. While the models mentioned above are usually trained on the MPE task in isolation, major success was reported when proceeding to multi-task settings, most prominently the onsets-and-frames approach [6], [7], [39]. However, this strategy is tailored to the piano-solo case (where the percussive keystrokes help to successfully track onsets) and is therefore not generally applicable to other instruments with less prominent onsets.…”
Section: A. Previous Models (mentioning)
confidence: 99%
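To make the multi-task idea in this excerpt concrete, here is a minimal PyTorch-style sketch of onsets-and-frames-style conditioning: an onset head and a frame head, with the frame head conditioned on the onset posteriors. The module names, layer sizes, and LSTM front-ends are illustrative assumptions, not the architecture of the cited models.

```python
import torch
import torch.nn as nn

class OnsetsAndFramesSketch(nn.Module):
    """Hypothetical sketch of the onsets-and-frames multi-task idea:
    two heads share an input spectrogram, and the frame head is
    conditioned on the (detached) onset predictions."""

    def __init__(self, n_bins=229, n_pitches=88, hidden=256):
        super().__init__()
        # Illustrative acoustic front-ends; real models use conv stacks + RNNs.
        self.onset_stack = nn.LSTM(n_bins, hidden, batch_first=True, bidirectional=True)
        self.frame_stack = nn.LSTM(n_bins, hidden, batch_first=True, bidirectional=True)
        self.onset_head = nn.Linear(2 * hidden, n_pitches)
        # The frame head sees its own features plus the onset posteriors.
        self.frame_head = nn.Linear(2 * hidden + n_pitches, n_pitches)

    def forward(self, spec):  # spec: (batch, time, n_bins)
        onset_feat, _ = self.onset_stack(spec)
        onsets = torch.sigmoid(self.onset_head(onset_feat))
        frame_feat, _ = self.frame_stack(spec)
        # Detach so the frame loss does not back-propagate into the onset branch.
        frames = torch.sigmoid(self.frame_head(
            torch.cat([frame_feat, onsets.detach()], dim=-1)))
        return onsets, frames
```

Training would then combine a binary cross-entropy loss on each head; this joint supervision is the multi-task aspect the excerpt refers to, and the reliance on sharp piano onsets is exactly what it critiques.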
“…Following existing literature [12,13,18,19,23,24], we report the frame-wise, note-wise, and note-with-offset-wise metrics to evaluate our model performance comprehensively. For the note-wise metric, we use an onset tolerance of 50 ms; for the note-with-offset-wise metric, we use an offset tolerance of 50 ms or 20% of the note duration, whichever is larger [2].…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
“…For the note-wise metric, we use an onset tolerance of 50 ms; for the note-with-offset-wise metric, we use an offset tolerance of 50 ms or 20% of the note duration, whichever is larger [2]. Readers are referred to Cheuk et al. [13], who explain the differences between these metrics in detail in their Section IV-C. In our experiments, we use the implementations from mir_eval 1 to calculate and report the above-mentioned metrics.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
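As an illustration of the tolerances quoted above, the following sketch computes the note-wise and note-with-offset-wise scores with mir_eval's transcription module; the reference and estimated note lists are placeholder data, and any preprocessing the citing papers apply before scoring is outside this sketch.

```python
import numpy as np
import mir_eval

# Placeholder transcriptions: (onset, offset) intervals in seconds, pitches in Hz.
ref_intervals = np.array([[0.00, 0.50], [0.50, 1.00]])
ref_pitches   = np.array([440.0, 523.25])
est_intervals = np.array([[0.02, 0.48], [0.51, 0.90]])
est_pitches   = np.array([440.0, 523.25])

# Note-wise: a note matches if its onset lies within 50 ms of the reference
# and the pitch agrees; offset_ratio=None disables the offset criterion.
p, r, f, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_intervals, ref_pitches, est_intervals, est_pitches,
    onset_tolerance=0.05, offset_ratio=None)
print(f"note-wise        P={p:.3f} R={r:.3f} F1={f:.3f}")

# Note-with-offset-wise: the offset must additionally fall within
# max(50 ms, 20% of the reference note duration).
p, r, f, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_intervals, ref_pitches, est_intervals, est_pitches,
    onset_tolerance=0.05, offset_ratio=0.2, offset_min_tolerance=0.05)
print(f"note-with-offset P={p:.3f} R={r:.3f} F1={f:.3f}")
```

The frame-wise scores mentioned in the excerpt are computed separately, typically from time-sampled pitch lists via mir_eval's multipitch module rather than from note intervals.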