2018
DOI: 10.4102/sajhrm.v16i0.1000

A micro-level outcomes evaluation of a skills capacity intervention within the South African public service: Towards an impact evaluation

Abstract: Orientation: Interest in measuring the impact of skills development interventions has increased in recent years. Research purpose: This article reports on an outcomes evaluation under the ambit of an impact assessment with reference to a research methodology workshop. Motivation of the study: A paucity of studies could be found measuring the workshop outcomes, especially within the public service as it pertains to training interventions. Research approach/design and method: A pretest–post-test research design …


Cited by 2 publications (9 citation statements) · References 14 publications
“…Bartlett’s test of sphericity was statistically significant (χ² = 7963.469; df = 2278; p = 0.000**), indicative of sufficient correlation between variables to substantiate exploratory factor analysis. The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy returned a value of 0.892, providing evidence that the sample size was acceptable (Jonck et al., 2018). As factor analysis can be influenced by outliers, normality tests with plots were performed to determine whether the 5% trimmed mean values differ from the mean values (Pallant, 2011).…”
Section: Methods
confidence: 99%
“…Quasi-experimental designs have been used extensively to determine the effectiveness of training interventions (see, e.g., Brutus & Donia 2010; Fjuk & Kvale 2018; Shannonhouse et al. 2017). Elaborating on the stated designs, the fundamental assumption of an impact assessment is that an intervention has defined outcomes (Jonck et al. 2018). White and Sabarwal (2014) noted that quasi-experimental research designs test causality in which the workshop is viewed as an ‘intervention’ evaluated to ascertain the efficacy thereof, measured by a predetermined measuring instrument.…”
Section: Methods
confidence: 99%
“…The aim of the research under study was to evaluate the impact of a research methodology skills development intervention on the knowledge of participants. For the sake of clarity, an impact evaluation in accordance with Rogers (2012), as cited in Jonck, De Coning and Radikonyana (2018:2), can be defined as any evaluation that systematically and empirically investigates the impact produced by an intervention. Moreover, Rogers (2014) emphasised the importance of the aforementioned evaluation, which is to provide empirical evidence about the change (if any) that can be attributed to the intervention and could be undertaken on a capacity-building workshop.…”
Section: Aim and Objectives
confidence: 99%