2010
DOI: 10.1007/s10649-010-9242-9

The relation between types of assessment tasks and the mathematical reasoning students use

Abstract: The relation between types of tasks and the mathematical reasoning used by students trying to solve tasks in a national test situation is analyzed. The results show that when confronted with test tasks that share important properties with tasks in the textbook the students solved them by trying to recall facts or algorithms. Such test tasks did not require conceptual understanding. In contrast, test tasks that do not share important properties with the textbook mostly elicited creative mathematically founded r…

Cited by 93 publications (87 citation statements) · References 10 publications
“…The first two methods allow estimating the so-called statistical pure contribution of each task to the general variation of test scores, while factor analysis is a good method for checking the homogeneity of the test (Bortz & Döring, 2005; Avanessov, 2009; Prado et al., 2010; Lim & Chapman, 2013). As a result, these methods indicate the suitability or unfitness of the considered test (Boesen & Palm, 2010; Xia, Liang & Wu, 2017). However, these and other studies by domestic and foreign authors do not consider the development of an algorithm for calibrating test tasks and determining the scale of testees' knowledge level.…”
Section: Methods and Data (mentioning)
confidence: 99%
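As an illustration of the homogeneity check mentioned in the statement above, here is a minimal sketch (not taken from the cited studies) of a common single-factor indicator: the share of variance carried by the first eigenvalue of the item correlation matrix. The `scores` matrix is a hypothetical placeholder for testees' task scores.

```python
import numpy as np

# Hypothetical item-score matrix: rows = testees, columns = test tasks
# (e.g., 0/1 or partial-credit scores). Replace with real data.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(200, 10)).astype(float)

# Correlation matrix of the tasks (columns are the variables).
corr = np.corrcoef(scores, rowvar=False)

# Eigen-decomposition of the correlation matrix, sorted in descending order.
# A dominant first eigenvalue suggests the tasks load on one underlying
# construct, i.e., the test is homogeneous in this rough sense.
eigvals = np.linalg.eigvalsh(corr)[::-1]
first_factor_share = eigvals[0] / eigvals.sum()
print(f"First-factor share of variance: {first_factor_share:.2f}")
```

A full factor analysis (e.g., with rotation and fit indices) would go further, but this eigenvalue ratio is a quick screening check under the stated assumptions.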
“…The tasks also covered different mathematical areas, which the students had not necessarily worked with recently. The criteria for identifying a non-routine task have previously been used by Boesen et al. (2010) to categorize a test task as requiring creative mathematical reasoning in relation to the textbook used by the students. This meant that a solution method for a task should not occur more than three times in the book.…”
Section: Categorization of Reasoning in Textbook Tasks (unclassified)
“…The reasoning framework (Lithner, 2008) has previously been used in similar studies in other contexts (Boesen et al., 2010; Lithner, 2004; Sumpter, 2013). When students' actual reasoning is analyzed, it is helpful to divide the working process into separate steps, which together make up a reasoning sequence.…”
Section: Categorization of Student Reasoning (unclassified)