2016
DOI: 10.1016/j.jclinepi.2015.08.013

An algorithm was developed to assign GRADE levels of evidence to comparisons within systematic reviews

Abstract: Objectives: One recommended use of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach is supporting quality assessment of evidence of comparisons included within a Cochrane overview of reviews. Within our overview, reviewers found that current GRADE guidance was insufficient to make reliable and consistent judgments. To support our ratings, we developed an algorithm to grade quality of evidence using concrete rules. Methods: Using a pragmatic, exploratory approach, we explored th…

Cited by 125 publications (135 citation statements); references 14 publications.
“…There is an absence of guidance on how to apply GRADE within an overview. Authors using GRADE faced challenges relating to the number of comparisons, and to subtle differences between comparisons, which created issues in terms of workload and of achieving consistency.

Summary of findings from Ballard [3]: Pollock [19] identified challenges in the consistent application of the GRADE approach to large volumes of evidence synthesised within overviews [37], and proposed a more algorithmic approach to judging quality of evidence within reviews [10]. There remains debate about the validity of this proposed approach [11, 12], but Pollock [19] argues that the approach does arguably facilitate transparency and consistency when judging the quality of evidence of many similar (but not identical) comparisons included within reviews.

• “emerging debate related to (iii) evaluating the quality and reporting of included research” (“quality of the body of evidence across included systematic reviews”)
• GRADE has been described as an approach for assessing the quality of the body of evidence across systematic reviews, but there is currently a lack of guidance to ensure appropriate use and interpretation of GRADE when applied in this way.

McClurg [21] repeated this method and developed an algorithm using the method recommended by Pollock [19], but involving a wider group of people in the decision making, including statisticians and clinicians. Judgement of complementarity: Agreement. Estcourt [20] used GRADE levels of evidence from within included reviews.…”
Section: Results
confidence: 99%
“…Moreover, AMSTAR-2 proposes a four-level scheme (high, moderate, low, and critically low) for appraisers to rate the overall confidence in the results of a systematic review, and each item was evaluated using three evaluation options: "yes," "partial yes," and "no." The Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) [12] was used to assess the quality of evidence by two reviewers (JK-H and M-S) independently. The following criteria were taken into account: risk of bias (that is, study limitations), inconsistencies, indirectness, inaccuracy, and publication bias [13].…”
Section: Quality Assessment (two reviewers, JK-H and M-S)
confidence: 99%
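The five GRADE domains quoted above (risk of bias, inconsistency, indirectness, imprecision/inaccuracy, and publication bias) lend themselves to the kind of concrete, rule-based rating the overview authors describe. As a minimal sketch only, assuming each domain contributes a downgrade of zero or one levels from a starting level of "high" (the function and domain names here are illustrative, not taken from the paper's actual algorithm):

```python
# Hypothetical sketch of a rule-based GRADE rating (names illustrative,
# not taken from the paper). Assumes each domain contributes a downgrade
# of 0 or 1 levels from a starting level of "high".
LEVELS = ["high", "moderate", "low", "very low"]

def grade_level(downgrades):
    """Start at 'high', drop one level per serious concern, floor at 'very low'."""
    total = sum(downgrades.values())
    return LEVELS[min(total, len(LEVELS) - 1)]

example = {
    "risk_of_bias": 1,      # serious study limitations
    "inconsistency": 0,
    "indirectness": 0,
    "imprecision": 1,       # e.g. wide confidence intervals
    "publication_bias": 0,
}
print(grade_level(example))  # -> low
```

Encoding each judgment as an explicit rule like this is what arguably gives the algorithmic approach its transparency and consistency across many similar comparisons, at the cost of the flexibility of case-by-case expert judgment.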
“…Empirical evidence is lacking on the optimal tool for assessing risk of bias or methodological quality of included systematic reviews, and how these tools might best be applied in overviews of reviews [30,31]. Guidance remains limited on how to extract and use appraisals of the quality of primary studies within the included systematic reviews and how to adapt GRADE methodology to overviews of reviews [7,23]. The challenges that overview authors reportedly face are often related to the steps where guidance is inadequate or conflicting.…”
Section: Discussion
confidence: 99%
“…On 7 March 2019 we conducted an iterative reference tracking ('snowballing') search [21,22]. We used 46 target articles, including all published articles and abstracts cited in the 2016 scoping review [23], as well as other recent relevant articles known to the research team. For each target article, we searched for 'citing' references in Google Scholar and Scopus and for 'similar articles' in PubMed from 1 January 2014 to present.…”
Section: Searches
confidence: 99%