2016
DOI: 10.1187/cbe.15-07-0142

A Primer for Developing Measures of Science Content Knowledge for Small-Scale Research and Instructional Use

Abstract: This essay, intended for faculty involved in small-scale projects, courses, or educational research, provides a step-by-step guide to the process of developing, scoring, and validating content knowledge assessments. The authors illustrate their discussion with examples from their measures of high school students’ understanding of cell biology and epigenetics.

Cited by 20 publications (25 citation statements)
References 39 publications
“…We developed the EcoEvo-MAPS questions through an iterative process ( NRC, 2001 ; Adams and Wieman, 2011 ; Bass et al , 2016 ) similar to that used for other biology concept assessments to optimize assessment validity ( Smith et al , 2008 ; Price et al , 2014 ; Couch et al , 2015b ). This approach involved multiple cycles of revision, including feedback from both students and faculty experts ( Table 1 ).…”
Section: Methods
confidence: 99%
“…These core concepts were developed following conversations with more than 500 stakeholders in biology education, are supported by several national funding agencies, and overlap with the Next Generation Science Standards ( NGSS Lead States, 2013 ) for K–12 education. While these assessments follow the methodology used for the development of concept inventories ( NRC, 2001 ; Adams and Wieman, 2011 ; Bass et al ., 2016 ), they differ in covering a wide breadth of concepts and are designed to measure student learning in cohorts of students at different time points in the undergraduate program.…”
Section: Introduction
confidence: 99%
“…To populate each category in our assessment, we selected only the questions from TIPSII with the highest item discrimination index (as reported in Kazeni, 2005 ), a statistical measure that distinguishes between high-performing and low-performing examinees for a given assessment. The average item discrimination index of questions selected from TIPSII was 0.4, well above the acceptable range (>0.3; Bass et al , 2016 ). Student scores on these 15 TIPSII items were averaged to create the main outcome variable in this study (Supplemental Material 5).…”
Section: Research Study
confidence: 82%
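The citation statement above screens items by their classical item discrimination index, keeping only items above the 0.3 threshold cited from Bass et al. (2016). As a rough illustration (not code from any of the cited papers), the common upper/lower-27% version of that index compares the proportion of correct answers in the top and bottom tails of examinees ranked by total score. The function name and the input format are assumptions for this sketch:

```python
def discrimination_index(scores, frac=0.27):
    """Classical item discrimination index for one dichotomous item.

    `scores` is a list of (item_score, total_score) pairs: item_score is
    0/1 for the item of interest, total_score is the examinee's overall
    test score. Returns the proportion correct in the top `frac` of
    examinees minus the proportion correct in the bottom `frac`
    (27% tails are a common convention; this is an illustrative sketch).
    """
    ranked = sorted(scores, key=lambda s: s[1])  # low to high total score
    n = max(1, round(len(ranked) * frac))        # examinees in each tail
    low, high = ranked[:n], ranked[-n:]
    p_high = sum(item for item, _ in high) / n   # proportion correct, top tail
    p_low = sum(item for item, _ in low) / n     # proportion correct, bottom tail
    return p_high - p_low
```

Under this convention, an item answered correctly mostly by high scorers yields an index near 1.0, while an item everyone answers correctly yields 0.0; values above roughly 0.3, as in the statements above, indicate an item that separates high- and low-performing examinees well.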