Crowdsourcing 2019
DOI: 10.4018/978-1-5225-8362-2.ch004
Massive Open Program Evaluation

Abstract: Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to regularly evaluate programs to improve quality. Are there new opportunities to expand user and stakeholder input, or involve others in e-learning program evaluation? This chapter asks researchers and practitioners to rethink existing paradigms and methods for program evaluation. Crowdsourced input may help leaders and stakeholders address persistent evaluation challenges and improve e…

Cited by 1 publication (1 citation statement)
References 26 publications
“…Crowdsourcing is paradigmatically anchored (Amankwatia, 2019; Estellés-Arolas & González-Ladrón-de-Guevara, 2012), however, difficulties in ontology and epistemology were already signaled (Sivula & Kantola, 2015). However, most studies conducted on crowdsourcing government are conducted in the interpretative paradigm.…”
Section: Introduction
Confidence: 99%