2003
DOI: 10.1525/mp.2003.21.1.21
Evaluating Evaluation: Musical Performance Assessment as a Research Tool

Abstract: Much applied research into musical performance requires a method of quantifying differences and changes between performances; for this purpose, researchers have commonly used performance assessment schemes taken from educational contexts. This article considers some conceptual and practical problems with using judgments of performance quality as a research tool. To illustrate some of these, data are reported from a study in which three experienced evaluators watched performances given by students at the Royal …

Cited by 89 publications (135 citation statements)
References 18 publications
“…Some studies support the relationship between decreased MPA and improved performance quality (e.g., Braden et al. in review; Hoffman and Hanrahan 2012; Roland 1994), and others do not (Deen 2000; Mansberger 1988; Osborne et al. 2007; Reitman 2001). An additional difficulty involves the disparity between judges' performance ratings, which prevents their use as a single compound variable or renders them unusable due to bias (Braden et al. in review; Clark and Williamon 2011; Thompson and Williamon 2003). Conversely, this difficulty generates an important and reassuring psycho-educational message: that one's feeling of anxiety or distress in a performance may not necessarily be detected by the audience (Braden et al. in review).…”
Section: Conclusion and Directions for Further Research
Citation type: mentioning
confidence: 99%
“…With the general consensus on the importance of sound in the domain of music, as "an art of sound" (40), it follows that experts and key decision makers would privilege auditory-related ratings in professional evaluation and assessment, even when such items show insufficient reliability (41-45). However, despite all that is invested in the auditory domain, low interrater correlations suggest that such a basis of evaluation is unreliable.…”
Citation type: mentioning
confidence: 99%
“…(Griffiths, 2008, 2010; Thompson & Williamon, 2003; Thompson, Williamon, & Valentine, 2007). Thompson and Williamon (2003) included these measures in their work investigating methods of performance evaluation, as they were taken directly from the guidelines of the Associated Board of the Royal Schools of Music, a system ubiquitous in UK music education. Previous research has shown that these three concepts, although correlated, do appear to be distinct.…”
Section: The Current Study
Citation type: mentioning
confidence: 99%