Hierarchy of evidence and appraisal of limitations (HEAL) grading system
2015 | DOI: 10.1016/j.evalprogplan.2014.08.003

Cited by 13 publications (11 citation statements)
References: 50 publications
“…There is also a pressing need to adapt existing ‘risk of bias’ and ‘quality of evidence’ assessment tools for behavioral interventions 44, 45, 46. For example, alternative strategies to minimize performance bias (such as preventing participant and provider awareness of primary research hypotheses) could be developed by trial researchers and then used by systematic reviewers in risk of bias assessments when blinding participants and providers is not an option 28.…”
Section: Suggestions Moving Forward (mentioning)
confidence: 99%
“…The Konan et al [18] and Goossens et al [20] studies also reported that the Thessaly test was not clinically useful, in contrast to Karachalios et al [8]. This may be because Konan et al and Goossens et al employed a cohort design: apart from a cohort study ranking lower on the hierarchy of evidence [21], a cohort study may also be somewhat unbalanced and biased [18][19][20][22] compared to Karachalios et al, who employed a randomised, crossover design. Such a design ensures there is no bias in the testing of diagnostic accuracy and follows the appropriate methodology for assessing diagnostic tests [23].…”
Section: Discussion (mentioning)
confidence: 78%
“…Our advocacy for the use of implementation science to study SCPs—a mandated clinical intervention for which evidence from RCTs is lacking—is congruent with calls for valuing a wider variety of research designs 52,53 and expanding conceptualizations of evidence. 54 Survivors and providers have first-hand experience with the processes and contexts in which SCPs are used.…”
(mentioning)
confidence: 91%