2017
DOI: 10.1016/j.paid.2017.02.015
Reliability and completion speed in online questionnaires under consideration of personality

Cited by 11 publications (4 citation statements)
References 55 publications
“…It is worth stating that the questionnaire was used as the main assessment tool in this study. The structure and items of the questionnaire were based on previous studies by Harms et al. (2017) and Tsai et al. (2020), in which its reliability and validity in similar learning environments were verified in detail [48,49]; professors on campus also helped to verify the relevance of the items and the clarity of their wording. To further adapt the questionnaire to the specific context of the present study, it was fine-tuned according to the results of a pretest, which involved 30 learners and showed good internal consistency (Cronbach's α = 0.87).…”
Section: Methods
confidence: 99%
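The internal-consistency figure quoted in the statement above (Cronbach's α = 0.87) follows from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch using NumPy; the helper name `cronbach_alpha` is illustrative, not from any cited paper:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

For perfectly correlated items the statistic equals 1; values around 0.87, as reported for the pretest, are conventionally read as good internal consistency.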
“…We incorporate controls for two alternative explanations. Although the evidence on the effect of personality traits on response latency is mixed (Harms, Jackel, and Montag 2017), we control in all models for the respondent's need for cognition, with the expectation that respondents high in need for cognition will be more reluctant to answer "don't know" (DK) and will therefore take more time to answer. In addition, we control for the total duration of the survey for each respondent as a proxy for the respondent's general speed.…”
Section: Methods
confidence: 99%
“…We calculated response time as the difference between the time that the first page of the survey was loaded (i.e., when participants completed the instructions and consent page) and the time that participants clicked submit on the last page of the survey. Previous researchers have suggested that IER occurs when a participant's completion time is two standard deviations above or below the survey's mean completion time (e.g., Heerwegh, 2003); is 1.5 interquartile ranges below the first quartile or above the third quartile (e.g., Funke, 2016); is below the first percentile (e.g., Gummer & Roßmann, 2015); or is below the fifth percentile (e.g., Harms et al., 2017). To create our conservative cutoff and ensure we had at least 100 IER participants for our analyses (VanVoorhis & Morgan, 2007), we identified participants who were one standard deviation below the mean completion time as having responded too quickly (response time-short; n = 118, 18.90%) and those one standard deviation above the mean completion time as having responded too slowly (response time-long; n = 143, 22.90%).…”
Section: Reactive Indices
confidence: 99%
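The completion-time cutoffs surveyed in the statement above (±2 SD around the mean, 1.5-IQR fences, and the 1st or 5th percentile) are all simple flagging rules over a vector of response times. A minimal sketch using NumPy; the function name `flag_completion_times` and the `rule` labels are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def flag_completion_times(times, rule="sd2"):
    """Return a boolean mask marking suspect completion times under one rule."""
    t = np.asarray(times, dtype=float)
    if rule == "sd2":        # +/- 2 SD around the mean (cf. Heerwegh, 2003)
        m, s = t.mean(), t.std(ddof=1)
        lo, hi = m - 2 * s, m + 2 * s
    elif rule == "iqr":      # 1.5-IQR fences (cf. Funke, 2016)
        q1, q3 = np.percentile(t, [25, 75])
        lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    elif rule == "pct1":     # below the 1st percentile (cf. Gummer & Roßmann, 2015)
        lo, hi = np.percentile(t, 1), np.inf
    elif rule == "pct5":     # below the 5th percentile (cf. Harms et al., 2017)
        lo, hi = np.percentile(t, 5), np.inf
    else:
        raise ValueError(f"unknown rule: {rule!r}")
    return (t < lo) | (t > hi)
```

Note that the one-sided percentile rules only catch fast responders, while the SD and IQR rules flag both tails, which is why the quoted study applies a ±1 SD variant to label both "too fast" and "too slow" groups.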