2017
DOI: 10.1108/jd-05-2016-0066

Reliability and validity test of a Scoring Rubric for Information Literacy

Abstract: Purpose: The purpose of this paper is to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010). Design/methodology/approach: Percentages of agreement and intraclass correlation were used to describe interrater reliability. For the determination of construct validity, factor analysis and reliability analysis were used. Criterion validity was calculated with Pearson correlations. Findings: In the described case, the Scoring Rubric for Information Literacy appears to be a …
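
The abstract names three families of statistics: percentage of agreement and intraclass correlation for interrater reliability, factor analysis and reliability analysis for construct validity, and Pearson correlations for criterion validity. As a hedged illustration only (not the authors' code), the sketch below computes percent agreement, an ICC and a Pearson correlation on invented scores from two hypothetical raters, using pandas, pingouin and SciPy; every name and value in it is made up.

```python
# A minimal, illustrative sketch (not the authors' code) of the statistics named in
# the abstract, run on invented scores from two hypothetical raters.
# Assumes pandas, pingouin and scipy are installed; all names and values are made up.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Hypothetical rubric totals that raters A and B gave to eight student products.
scores = pd.DataFrame({
    "product": list(range(8)) * 2,
    "rater":   ["A"] * 8 + ["B"] * 8,
    "score":   [70, 55, 80, 65, 90, 60, 75, 85,   # rater A
                70, 55, 78, 70, 88, 62, 75, 80],  # rater B
})
wide = scores.pivot(index="product", columns="rater", values="score")

# Interrater reliability (1): percentage of exact agreement between the two raters.
pct_agreement = (wide["A"] == wide["B"]).mean() * 100
print(f"Exact agreement: {pct_agreement:.0f}%")

# Interrater reliability (2): intraclass correlation; pingouin reports ICC1..ICC3k.
icc = pg.intraclass_corr(data=scores, targets="product",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Criterion validity: Pearson correlation of the mean rubric score with an
# external criterion, here an invented course grade per product.
course_grade = [7.0, 5.5, 8.0, 6.5, 9.0, 6.0, 7.5, 8.5]
r, p = pearsonr(wide.mean(axis=1), course_grade)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

Which ICC form applies depends on the rating design (for instance, whether the same raters score every product), a detail the abstract excerpt does not settle, so the output here only demonstrates the mechanics.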

Cited by 12 publications (11 citation statements)
References 20 publications
“…Carrying out the validity process provides data with which to refine the rubric and to decide whether criteria should be adapted or eliminated, in a continual effort to improve the instrument so that its use rests on empirical support. Subsequently, we will follow the recommendations in the literature [14] and continue with the reliability process, using methods such as the percentage of agreement between raters, or statistics such as the intraclass correlation coefficient [15].…”
Section: Discussion
Citation type: mentioning, confidence: 99%
“…An important problem to point out is the use of rubrics without attention to the systematic process of their construction [12]; because they are used as instruments to evaluate performance on a task, their validity and reliability properties should be considered, for whatever activity or learning product they assess, if they are to be effective [13][14][15].…”
Section: Introduction
Citation type: mentioning, confidence: 99%
“…To determine the underlying structures by country, confirmatory factor analysis was applied (Van Helvoort et al., 2017). This procedure had previously been used by Mackey and Ho (2005) to identify dimensions of IL and information technologies; by ChanLin (2009) with library and information science (LIS) undergraduates; and by , who present a large-scale study involving IL perceptions among social science students.…”
Section: Latent Structures
Citation type: mentioning, confidence: 99%
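
The statement above refers to confirmatory factor analysis for uncovering latent structures. As a loose illustration of the general idea only (not the procedure of Van Helvoort et al., 2017, or of the citing study), the sketch below runs an exploratory factor analysis with scikit-learn on synthetic scores for invented rubric criteria and prints the factor loadings.

```python
# A loose illustration only: exploratory factor analysis on synthetic scores for
# invented rubric criteria, using scikit-learn. This is not the confirmatory
# procedure of the cited studies; criterion names and loadings are made up.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
n = 200

# Two hypothetical latent abilities drive six invented rubric criteria.
searching = rng.normal(size=n)
writing = rng.normal(size=n)
data = pd.DataFrame({
    "query_formulation": 0.8 * searching + rng.normal(scale=0.4, size=n),
    "source_selection":  0.7 * searching + rng.normal(scale=0.5, size=n),
    "source_evaluation": 0.6 * searching + rng.normal(scale=0.6, size=n),
    "argumentation":     0.8 * writing   + rng.normal(scale=0.4, size=n),
    "referencing":       0.6 * writing   + rng.normal(scale=0.6, size=n),
    "structure":         0.7 * writing   + rng.normal(scale=0.5, size=n),
})

# Standardise, extract two factors with a varimax rotation, and show the loadings.
X = StandardScaler().fit_transform(data)
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = pd.DataFrame(fa.components_.T, index=data.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```

A confirmatory analysis, as in the citing study, would instead specify in advance which criteria load on which factor and test the fit of that model, which requires dedicated CFA/SEM tooling rather than this exploratory approach.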
“…A meta-analysis of the effectiveness of scoring rubrics for performance assessment found that rubrics improve both intra- and inter-rater reliability (Jonsson and Svingby 2007). Within the library setting, rubrics are used to evaluate student learning of information literacy standards (Wilson and Angel 2017; van Helvoort 2017). Rubrics are also useful for evaluating tools in a standardized way.…”
Section: Literature Review
Citation type: mentioning, confidence: 99%