2015
DOI: 10.1080/21650349.2015.1084893

A CAT with caveats: is the Consensual Assessment Technique a reliable measure of graphic design creativity?

Cited by 25 publications (14 citation statements: 3 supporting, 11 mentioning, 0 contrasting)
References 15 publications
“…Cronbach's alpha for each rated category was calculated using the Wessa.net online inter-rater agreement calculator (Table 1). The combined inter-rater agreement across all items (Combined) was 0.68, satisfactory though slightly below the 0.7 threshold for acceptable inter-rater agreement in design (Jeffries, 2017). This internal consistency was interpreted as showing sufficient consensus around overall quality, and good consistency on some qualities, to warrant further qualitative investigation.…”
Section: Inter-rater Agreement of the Rated Qualities (mentioning)
confidence: 86%
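
For reference, the alpha statistic described above can be reproduced without the Wessa.net calculator. The following Python sketch computes Cronbach's alpha over a products-by-judges ratings matrix, treating each judge as an "item"; the ratings shown are invented illustrative data, not the study's.

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for an (n_products x n_judges) ratings matrix,
        treating each judge as an 'item'."""
        k = ratings.shape[1]                         # number of judges
        item_vars = ratings.var(axis=0, ddof=1)      # per-judge variance across products
        total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-product total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: six design products rated 1-5 by four judges.
    ratings = np.array([[4, 5, 4, 4],
                        [2, 3, 2, 3],
                        [5, 5, 4, 5],
                        [3, 3, 3, 2],
                        [1, 2, 2, 1],
                        [4, 4, 5, 4]])
    print(round(cronbach_alpha(ratings), 2))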
“…Once the linkographs are created, they can be evaluated using an information-theoretic approach (Kan and Gero, 2017). The drawback of linkography and other traditional methods relying on expert raters is that they suffer from known inter-rater reliability issues (Jeffries, 2017) and cannot be automated for routine use by non-experts. To study transcribed protocols of design conversations objectively and quantitatively, here we apply a semantic analysis approach based on dynamic semantic networks of nouns.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although good reliability has been found with as few as three judges (e.g., Rostan, 2010; Beaty et al., 2013), groups of 15 judges (see, e.g., Kaufman et al., 2013; Jeffries, 2017) are not uncommon. Inter-rater reliability statistics are influenced by the number of raters (Gwet, 2014), so the agreement levels reported in published research may be inflated.…”
Section: Introduction (mentioning)
confidence: 99%
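
The panel-size effect noted above can be made concrete with the Spearman-Brown prophecy formula, which predicts the reliability of a k-judge average from a single-judge reliability. The sketch below is illustrative only; the single-rater value of 0.30 is an assumed figure, not taken from any cited study.

    def spearman_brown(r_single, k):
        """Predicted reliability of the mean of k raters, given single-rater reliability."""
        return k * r_single / (1 + (k - 1) * r_single)

    # With an assumed single-rater reliability of 0.30, panel size alone
    # moves the aggregate statistic past common thresholds:
    for k in (3, 8, 15):
        print(k, round(spearman_brown(0.30, k), 2))  # 3: 0.56, 8: 0.77, 15: 0.87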
“…Dollinger and Shafran, 2005; Kaufman et al., 2013; Storme et al., 2014), there are applications where it has been less useful. For example, Jeffries (2017) reports varying levels of inter-rater reliability depending on the target graphic design task used with the CAT. While simpler tasks, such as manipulating text to creatively express a single word, had high inter-rater reliability, more complex tasks, such as designing a t-shirt graphic, had unacceptably low inter-rater reliability.…”
Section: Introduction (mentioning)
confidence: 99%