2014
DOI: 10.1002/jrsm.1128
Quality assessment of qualitative evidence for systematic review and synthesis: Is it meaningful, and if so, how should it be performed?

Abstract: The critical appraisal and quality assessment of primary research are key stages in systematic review and evidence synthesis. These processes are driven by the need to determine how far the primary research evidence, singly and collectively, should inform findings and, potentially, practice recommendations. Quality assessment of primary qualitative research remains a contested area. This article reviews recent developments in the field charting a perceptible shift from whether such quality assessment should be…

Cited by 173 publications (170 citation statements); references 22 publications.
“…While critical appraisal has come to be an expected element of qualitative‐evidence synthesis, it is contentious because of the epistemological variety of qualitative research, the diversity of appraisal tools, the variability in ratings within as well as between tools (Carroll & Booth, ; Dixon‐Woods et al, ), and the fact that such tools do not measure conceptual quality (Toye, Seers, & Barker, ). Applying the CASP checklist for qualitative studies to mixed‐methods studies is problematic because the latter should be evaluated as a whole, given that the strengths of one strand can compensate for deficiencies of the other (Heyvaert, Hannes, Maes, & Onghena, ).…”
Section: Discussion
confidence: 99%
“…Disagreements between researchers on quality ratings were resolved through discussion. Articles were not scored or graded, nor were any articles excluded from the synthesis as a result of a lack of consensus on a validated and reliable method for assigning a numerical score and/or excluding qualitative studies from systematic reviews (60,66,67) . Aspects of each study were assessed as being either adequate/appropriate or inadequate/inappropriate against each of 22 criteria included in the RATS guidelines (65) .…”
Section: Quality Appraisal
confidence: 99%
“…The overarching theme encompassing the main themes captured the sentiment that students are navigating their way through dietetics education programmes. † Lordly and MacLellan are two papers from the same study.…”
Section: Overarching Theme: Navigating Through the Ups And Downs
confidence: 99%
“…In systematic reviews this is typically mentioned as one step of the critical appraisal. However, to date, such critical appraisal is often implicit, based on criteria varying for every systematic review (Collaboration for Environmental Evidence , Carroll and Booth , Stewart and Schmid ). We therefore introduce an evidence assessment tool providing a clear appraisal guideline to score the reliability of individual studies.…”
Section: Introduction
confidence: 99%