2022
DOI: 10.3390/jintelligence10040102
Stop Worrying about Multiple-Choice: Fact Knowledge Does Not Change with Response Format

Abstract: Declarative fact knowledge is a key component of crystallized intelligence. It is typically measured with multiple-choice (MC) items. Other response formats, such as open-ended formats, are less frequently used, although these formats might be superior for measuring crystallized intelligence. Whereas MC formats presumably only require recognizing the correct response to a question, open-ended formats supposedly require cognitive processes such as searching for, retrieving, and actively deciding on a response fr…

Cited by 6 publications (6 citation statements)
References 88 publications
“…We went beyond previous research regarding Gc and Gr, as our Gc factor was indicated through a broad range of vocabulary and declarative knowledge tests in different response formats (i.e., multiple-choice vs. open-ended). Further, the reported measurement model of Gc replicates earlier research showing that tests of vocabulary and declarative knowledge share a common core (Schipolowski et al., 2014) and that response format plays no role in individual differences in declarative knowledge (Goecke et al., 2022). The predictive power of Gc toward Gr implies that Gc explains incremental variance in Gr that is not accounted for by the other covariates.…”
Section: Discussion (supporting)
confidence: 79%
“…Given vocabulary and fact knowledge measures of Gc are very strongly related (Schipolowski et al., 2014), their relation with Gr should be next to indistinguishable. Interestingly, comparing open-ended and closed fact knowledge response formats (i.e., varying the retrieval demands in Gc tasks) did not reveal meaningful differences in correlations in a recent study (Goecke et al., 2022), which underlines the idea that Gc and Gr should be related independent of task instantiations.…”
Section: Crystallized Intelligence (mentioning)
confidence: 68%
“…with divergent biographies or cultural backgrounds (Watrin et al., 2023). Furthermore, the present study focuses on the recognition of public events using a multiple-choice response format rather than an open-ended one, which has been reported to be more age-sensitive (see also Goecke et al., 2022, for the structural consistency of closed and open-ended standardized knowledge tests). We deem it important to complement and further explain the current results by concurrently investigating closed and open-ended response formats to further explore a potential reminiscence bump.…”
Section: Limitations (mentioning)
confidence: 99%
“…The assessment typically involves multiple-choice items (Emden et al., 2018; Neumann et al., 2013) or ordered multiple-choice items (Briggs et al., 2006; Todd et al., 2017). Chen et al. (2016) and Goecke et al. (2022) have discussed the influence of item formats. Goecke et al. (2022) state that the method of inquiry does not affect what is measured with different response formats, while Chen et al. (2016) opt for mixed formats.…”
Section: Development Of Learning Progressions (mentioning)
confidence: 99%
“…Chen et al. (2016) and Goecke et al. (2022) have discussed the influence of item formats. Goecke et al. (2022) state that the method of inquiry does not affect what is measured with different response formats, while Chen et al. (2016) opt for mixed formats. A review by Harris et al. (2022) shows that the majority of studies utilizes short-answer or fixed response tasks.…”
Section: Development Of Learning Progressions (mentioning)
confidence: 99%