1993
DOI: 10.2307/3345476

A Comparison of Faculty, Peer, and Self-Evaluation of Applied Brass Jury Performances

Abstract: Authorities agree that peer evaluation and self-evaluation can help improve teaching performance. Evaluation of applied music skills, however, remains heavily teacher-centered. In this investigation, I explored the efficacy of peer and self-evaluation of applied brass jury performances. In three episodes at two locations, university faculty members evaluated live brass jury performances using an author-constructed Brass Performance Rating Scale (BPRS). Also using the BPRS, students rated these same performance…

Cited by 65 publications (78 citation statements); References 22 publications.
“…Bergee (1993, 1997) found that undergraduate performance majors evaluated collegiate-level solo performances as consistently as university teachers did. This is similar to findings of Mills (1987), who determined that the rankings of instrumentalists by music teachers were similar to those of specialist students and nonspecialists with music experience.…”
mentioning
confidence: 93%
“…Piano faculty members were also no more consistent than were students when choosing paired comparisons of piano performers (Wapnick, Flowers, Alegant, & Jasinskas, 1993). Bergee (2003) later found that lack of experience in evaluating had no effect on reliability in faculty evaluation of undergraduate student performances. Burnsed and King (1987) found inconsistencies among expert judges for tone and intonation.…”
mentioning
confidence: 94%
“…Each dimension on Form B was rated lower than its counterpart on Form A (since the number “1” is considered the “best” score, lower scores or ratings are indicated by higher numbers). Paired-samples t-tests revealed significant differences between forms at the .05 level or lower in the following dimensions: tone (t = -2.27, p = .027), diction (t = -2.40, p = .02), blend (t = -3.36, p = .001), intonation (t = -2.34, p = .023), rhythm (t = -2.80, p = .007), balance (t = -4.09, p < .001), total score (t = -3.94, p < .001), and rating (Garman, Barry, & DeCarbo, 1991; Bergee, 1988, 1989, 1993, 1997, 2003). The additional analysis of the means of dimensions, total score, and overall ratings corroborates the above comments (see Table 1 and above t-test results). Form B yielded significantly different ratings in every dimension except interpretation, suggesting that the adjudicators in this setting rated the choirs more severely when using Form B.…”
mentioning
confidence: 99%
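The paired-samples comparison reported in the quotation above can be sketched with standard statistical software. The example below is illustrative only: the ratings are hypothetical placeholder values, not data from the cited study, and it simply shows how a t statistic and p value of the form "tone (t = ..., p = ...)" are obtained for one rating dimension.

from scipy import stats

# Hypothetical adjudicator ratings for one dimension (e.g., tone);
# "1" is the best rating, so higher numbers indicate lower-rated performances.
form_a_tone = [1, 2, 1, 3, 2, 1, 2, 2]  # same judges, Form A
form_b_tone = [2, 2, 3, 3, 2, 2, 3, 2]  # same judges, Form B

# Paired-samples (related-samples) t-test across the matched ratings.
t_stat, p_value = stats.ttest_rel(form_a_tone, form_b_tone)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")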
“…Very little is known about the creative process in group discussion by studio artists, let alone in remote discussion. The very few studies [2,15] that have examined peer assessment in artistic settings have been in the arena of music performance, and so it is unclear how they apply. Findings from the peer assessment studies and remote brainstorming studies have been in non-artistic topics and settings, and the nature of the brainstorming and decision-making has been different from what artists do: the studio art design setting involves visual design that includes esthetic judgment.…”
Section: Feedback In Studio Art Groups
mentioning
confidence: 98%