2014
DOI: 10.1145/2668124
The Role of Individual Difference in Judging Expressiveness of Computer-Assisted Music Performances by Experts

Abstract: Computational systems for generating expressive musical performances have been studied for several decades now. These models are generally evaluated by comparing their predictions with actual performances, both from a performance parameter and a subjective point of view, often focusing on very specific aspects of the model. However, little is known about how listeners evaluate the generated performances and what factors influence their judgement and appreciation. In this article, we present two studies, conduc…

Cited by 10 publications (5 citation statements)
References 41 publications (36 reference statements)
“…This might indicate that performances are rated based on some form of perceived distance from an 'ideal' performance. Apart from music education, assessment of performances is also an important area of focus for the evaluation of computer generated music performances (Bresin and Friberg, 2013) where researchers have primarily focused on listening studies to understand the effect of musical knowledge and biases on rating performances (De Poli et al, 2014) and the degree to which computer generated performances stack up against those by humans (Schubert et al, 2017).…”
Section: Performance Assessment
confidence: 99%
“…De Poli et al (2014) and Schubert et al (2014a) specifically study how the audience judges entire performances of computational models, by analyzing listeners' scores of several aspects including technical accuracy, emotional content and coherence of the performed style. The listeners were categorized into two different cognitive styles: music systemizers (those who judge a performance in technical and formal terms) and music empathizers (those who describe a performance in terms of its emotive content).…”
Section: Qualitative Evaluation Via Listening Tests
confidence: 99%
“…The CaRo 2.0 system won that final stage (see Table 3). The results as well as the comments by the experts are analysed in [41]. The performances played during Rencon 2011 can be listened to at http://smc.dei.unipd.it/advances hci/.…”
Section: Assessment
confidence: 99%