2013
DOI: 10.4324/9781315004914
Assessing Student Learning in Higher Education

Cited by 214 publications (183 citation statements); references 0 publications.
“…We also examine the role of formative online feedback on the usage of educational technology. As noted above, the important role of assessment and feedback in learning and teaching in higher education has been well recognised in the literature (Brown, Bull & Pendlebury, 1997; Gibbs & Simpson, 2004; Nicol & Macfarlane-Dick, 2006; Bloxham & Boyd, 2007). It is in this process that students' learning is consolidated, which in turn produces persistent changes in students' understanding.…”

Section: Introduction
confidence: 79%
“…We drew from the language of the NGSS, McNeill and Krajcik (2012) and Moje et al (2004), and our own previous work, to design the rubric and address reliability and validity. One of the major challenges to rubric reliability is interrater consistency (Brown, Bull, & Pendlebury, 1997), with alpha scores for interrater agreement greater than .70 considered sufficient (Jonsson & Svingby, 2017). In this study, three authors scored the display boards, which met the acceptable levels of agreement for interrater reliability (Baker, Abedi, Linn, & Niemi, 1996), with an interrater reliability of 0.88.…”

Section: Rubric Development
confidence: 99%
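The excerpt above reports an interrater reliability of 0.88 against a .70 sufficiency threshold without naming the agreement coefficient used. As a hedged illustration only, here is a minimal sketch of one common two-rater statistic, Cohen's kappa; the rubric scores and rater data below are invented for the example and are not taken from the cited study.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same items.
# Assumption: this is one of several possible interrater-agreement
# statistics; the cited study does not specify which coefficient it used.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical scores.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal score frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 1-4 rubric scores for ten student display boards:
a = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]
b = [3, 4, 2, 3, 2, 1, 4, 2, 3, 3]
kappa = cohens_kappa(a, b)
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it can sit well below raw percent agreement on the same data.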
“…Science, technology, engineering, and mathematics (STEM) courses in higher education overwhelmingly use exam performance to assign course grades [1][2][3]; in STEM education literature, performance is often treated as equivalent to conceptual understanding of the course material [4][5][6]. The underlying assumption, shared by students, instructors, and institutions, is that performance metrics such as grades represent a quantifiable measurement of understanding [7][8][9][10][11], leading to considerable research on how to improve these metrics [12][13][14][15][16]. Furthermore, grades can have a major impact on student retention, especially for at-risk students, as low exam scores can be interpreted as evidence for exclusion [17][18][19].…”

Section: Introduction
confidence: 99%