2020
DOI: 10.1136/bmjopen-2020-037324

Comparison of risk-of-bias assessment approaches for selection of studies reporting prevalence for economic analyses

Abstract: Objectives: Within cost-effectiveness models, prevalence figures can inform transition probabilities. The methodological quality of studies can inform the choice of prevalence figures, but no single obvious candidate tool exists for assessing the quality of observational epidemiological studies when selecting prevalence estimates. We aimed to compare different tools for assessing the risk of bias of studies reporting prevalence, and to develop and compare possible numerical scoring systems using these tools to set a threshold…

Cited by 23 publications (19 citation statements). References 35 publications.
“…Quality of included studies was rated by a reviewer (KM) using critical appraisal tools from the Joanna Briggs Institute (JBI) specific to RCTs and cohort studies, 16 and ratings were checked by a second reviewer (MVJ). To provide a concise metric for quantitative comparisons across designs, an overall quality proportion ( α q ) was calculated for each study using a count score 17 whereby the number of ‘yes’ items was divided by the total number of items on the appraisal tool (‘unclear’ items were counted as half, while ‘not applicable’ items were excluded). Proportions ranged from 0 to 1 and were interpreted in a manner similar to Cronbach's α 18 : α q ≥ 0.9 ( excellent ), α q ≥ 0.8 ( good ), α q ≥ 0.7 ( acceptable ) and α q < 0.7 ( poor ).…”
Section: Methods
confidence: 99%
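The count score quoted above reduces to simple arithmetic over the checklist items. The following is a minimal Python sketch of that calculation and the Cronbach's-α-style interpretation bands; the function names and the example ratings are illustrative assumptions, not code from the cited study.

# Minimal sketch of the count-score quality proportion described above.
# Assumption: each item rating is one of "yes", "no", "unclear", "not applicable".

def quality_proportion(ratings):
    """Return the overall quality proportion (alpha_q) for one study."""
    applicable = [r for r in ratings if r != "not applicable"]
    if not applicable:
        raise ValueError("No applicable appraisal items")
    # 'yes' scores 1, 'unclear' scores 0.5, 'no' scores 0; 'not applicable' is excluded.
    score = sum(1.0 if r == "yes" else 0.5 if r == "unclear" else 0.0
                for r in applicable)
    return score / len(applicable)

def interpret(alpha_q):
    """Map alpha_q onto the labels quoted above."""
    if alpha_q >= 0.9:
        return "excellent"
    if alpha_q >= 0.8:
        return "good"
    if alpha_q >= 0.7:
        return "acceptable"
    return "poor"

# Example: a hypothetical 11-item JBI cohort checklist.
ratings = ["yes"] * 8 + ["unclear", "no", "not applicable"]
aq = quality_proportion(ratings)                 # (8 + 0.5) / 10 = 0.85
print(f"alpha_q = {aq:.2f} ({interpret(aq)})")   # alpha_q = 0.85 (good)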
“…However, the Cooper checklist does not allow an in-depth analysis of the quality of the clinical data used, especially for retrospective clinical data that are at a high risk of bias. Exploring this point, a recent study suggested using two checklists, the Joanna Briggs Institute (JBI) Checklist for Prevalence Studies and a modified version of Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) (25); it would be relevant to combine the Cooper checklist with these two checklists. However, it must also be kept in mind that the more tools are used for a single study, the more complicated and time-consuming the assessment becomes.…”
Section: Discussion
confidence: 99%
“…Risk of bias for case-control and analytical cross-sectional studies was assessed using the Joanna Briggs Institute (JBI) Checklist for Case-Control [13] and Analytical Cross-Sectional Studies [14], respectively. A ≥50% cut-off score was used to indicate a low risk of bias [15]. For randomised controlled trials (RCTs), the revised Cochrane "Risk of bias" (RoB) tool (RoB 2.0) was utilised [16], based on which an overall summary RoB judgement (low; some concerns; high) for each specific outcome was derived [17].…”
Section: Reporting Bias Assessment
confidence: 99%
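The ≥50% cut-off quoted above amounts to a simple threshold on the proportion of checklist items that are met. A minimal sketch follows, assuming the score is the fraction of applicable JBI items answered "yes"; the function name and output labels are illustrative only and do not come from the cited papers.

# Hypothetical illustration of a >=50% cut-off on a checklist score.
def risk_of_bias_category(yes_items: int, applicable_items: int) -> str:
    """Label a study as low risk of bias when at least half the applicable items are met."""
    score = yes_items / applicable_items
    return "low risk of bias" if score >= 0.5 else "not low risk of bias (below cut-off)"

print(risk_of_bias_category(6, 10))  # low risk of bias
print(risk_of_bias_category(4, 10))  # not low risk of bias (below cut-off)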