2021
DOI: 10.12688/openreseurope.13339.1
The development of a four-tier test to evaluate research integrity training

Abstract: Although higher education institutions across Europe and beyond are paying more and more attention to research integrity training, there are few studies and little evidence on what works and what does not work in such training. One way to overcome this challenge is to evaluate such training with standardised instruments. Experts/trainers have used qualitative approaches to evaluate their research integrity training's successes, but it is difficult to compare their results with others. Sometimes they conduct st…

Cited by 3 publications (4 citation statements) · References 38 publications
“…6. Although ethics and integrity can be differentiated (see e.g., Marusic et al 2016; Valkenburg et al 2021; Zollitsch et al 2021) and may require separate training, there is substantial overlap and, in particular in the early stages of integrity education and training development, the terms have been used interchangeably in the literature. We use 'RCR training' as an umbrella term.…”
Section: Discussion
confidence: 99%
“…Because institutions are, in most cases, organizing trainings themselves, they are optimally positioned to design studies and collect empirical data about the effectiveness of trainings and about how they are evaluated by trainees. Nonetheless, although pre-post designs with control groups provide the most conclusive evidence for program effectiveness, a multitude of designs and instruments is used, and often only one or two outcome criteria are used (e.g., Antes et al 2009; Marusic et al 2016; McIntosh et al 2018; Mumford et al 2015; Steele et al 2016; Zollitsch et al 2021). As a result, the available evidence is often unreliable, with a high risk of bias (Marusic et al 2016; Zollitsch et al 2021).…”
Section: Collect and Share Empirical Evidence
confidence: 99%