2023
DOI: 10.1002/pds.5601

Validation to correct for outcome misclassification bias

Abstract:
1. Outcome validation is often requested by regulators to address misclassification bias in database studies of drug safety and comparative effectiveness.
2. Validation studies commonly report only one positive predictive value (PPV) estimate.
3. Since a high value of PPV does not imply misclassification bias is negligible, and a low value of PPV does not imply misclassification bias is important, this approach does not adequately address outcome misclassification bias.
4. Validation should be designed to inform …
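
To make point 3 concrete, the sketch below applies a simple textbook bias correction (observed cases × PPV ÷ sensitivity) to a hypothetical two-group cohort. All counts, PPVs, and sensitivities are invented for illustration and are not taken from the paper; the point is only that an identical, reassuringly high PPV in both groups can still leave a substantially biased risk ratio when sensitivity is differential.

    # Minimal sketch of a quantitative bias analysis for outcome
    # misclassification. All numbers are hypothetical illustration values.

    def corrected_cases(observed: float, ppv: float, sensitivity: float) -> float:
        """Estimate the true case count.

        observed * ppv     -> true positives among algorithm positives
        ... / sensitivity  -> adds back the cases the algorithm missed
        """
        return observed * ppv / sensitivity

    # Hypothetical cohort: exposed vs. comparator, equal denominators.
    obs_exp, obs_cmp = 120, 100
    n_exp = n_cmp = 10_000

    # Same high PPV in both groups, but outcomes are captured less
    # completely in the exposed group (differential sensitivity).
    ppv_exp, sens_exp = 0.90, 0.60
    ppv_cmp, sens_cmp = 0.90, 0.80

    rr_obs = (obs_exp / n_exp) / (obs_cmp / n_cmp)
    rr_cor = (corrected_cases(obs_exp, ppv_exp, sens_exp) / n_exp) / (
        corrected_cases(obs_cmp, ppv_cmp, sens_cmp) / n_cmp
    )

    print(f"Observed risk ratio:  {rr_obs:.2f}")   # 1.20
    print(f"Corrected risk ratio: {rr_cor:.2f}")   # 1.60 despite PPV = 0.90

Despite a PPV of 0.90 in both groups, the corrected risk ratio moves from 1.20 to 1.60, which is the abstract's warning in miniature.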

Cited by 8 publications (6 citation statements). References 25 publications.
“…Real-world data sources are complex, and the investigator must carefully consider whether the data on hand are sufficient to answer the research question. For example, a study that relies solely on claims data for outcome ascertainment may suffer from outcome misclassification bias (Lanes and Beachler 2023). This bias can be addressed through medical record validation for a random subset of patients, followed by quantitative bias analysis (Lanes and Beachler 2023).…”
Section: Discussion | mentioning | confidence: 99%
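
As a concrete illustration of the validation step described above, the sketch below estimates a PPV with an exact (Clopper-Pearson) binomial confidence interval from a hypothetical chart review of 100 randomly sampled algorithm-positive patients. The 75/100 counts are invented, and the use of scipy is an assumption for illustration; neither comes from the cited study.

    # Hypothetical sketch: PPV from chart review of a random subset of
    # algorithm-positive patients, with an exact binomial 95% CI.
    from scipy.stats import beta

    confirmed, reviewed = 75, 100   # invented chart-review counts

    ppv = confirmed / reviewed

    # Clopper-Pearson interval for a binomial proportion.
    lower = beta.ppf(0.025, confirmed, reviewed - confirmed + 1)
    upper = beta.ppf(0.975, confirmed + 1, reviewed - confirmed)

    print(f"PPV = {ppv:.2f} (95% CI {lower:.2f}-{upper:.2f})")

The resulting PPV estimate and its interval are exactly the inputs that a downstream quantitative bias analysis consumes.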
“…As feasible, existing validation studies for the exposure and outcome should be referenced, or new validation efforts undertaken. The results of such validation studies can inform study estimates via quantitative bias analyses (Lanes and Beachler 2023). The study team may also consider biases arising from unmeasured confounding and plan quantitative bias analyses to explore how unmeasured confounding may impact estimates.…”
Section: Quality Control and Sensitivity Analyses (Step 8) | mentioning | confidence: 99%
“…In comparison, the algorithm used in the present study was based on the algorithm developed in the Food and Drug Administration’s Mini-Sentinel study but included two steps: a screening algorithm to enable estimates of sensitivity and a predictive model algorithm. The screening algorithm had a PPV of 65% (95% CI, 60–71%) and a presumed sensitivity close to 100% (20). The performance characteristics for the predictive model algorithm (at the selected probability threshold) were a PPV of 94% (95% CI, 91–98%), sensitivity of 92%, and specificity of 89% (8).…”
Section: Discussion | mentioning | confidence: 99%
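
For readers unfamiliar with how PPV, sensitivity, and specificity all fall out of one validation cross-tabulation, here is a minimal sketch. The counts are invented and merely mimic the qualitative trade-off described above (a broad screening step that misses almost nothing but has modest PPV, versus a predictive model that raises PPV at the cost of some sensitivity); they are not the study's data and will not reproduce the cited estimates exactly.

    # Sketch of the 2x2 bookkeeping behind algorithm performance metrics.
    # All counts are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Confusion:
        tp: int  # algorithm positive, truly a case
        fp: int  # algorithm positive, truly a non-case
        fn: int  # algorithm negative, truly a case
        tn: int  # algorithm negative, truly a non-case

        @property
        def ppv(self) -> float:
            return self.tp / (self.tp + self.fp)

        @property
        def sensitivity(self) -> float:
            return self.tp / (self.tp + self.fn)

        @property
        def specificity(self) -> float:
            return self.tn / (self.tn + self.fp)

    screen = Confusion(tp=130, fp=70, fn=0, tn=800)  # broad screen: misses nothing, PPV 0.65
    model = Confusion(tp=120, fp=8, fn=10, tn=862)   # model: PPV 0.94, some missed cases

    for name, c in (("screen", screen), ("model", model)):
        print(f"{name}: PPV={c.ppv:.2f}  sens={c.sensitivity:.2f}  spec={c.specificity:.2f}")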
“…While the performance of refined algorithms was improved from that of the original algorithm, these algorithms were not completely accurate and were less accurate as proxies for stricter definitions of JIA flare. Studies that use these algorithms should also consider quantitative bias analysis or other approaches to account for the influence of misclassification. 28,29 Our study was conducted using prescription data from EHR, and the algorithms tested may perform differently based on administrative dispensing data, though quantitative bias analysis suggested such misclassification would have little impact on algorithm performance. Similarly, because data were gathered from only three clinical institutions, the findings may not generalize to patients treated at other centers.…”
Section: Discussion | mentioning | confidence: 99%
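
To show what "quantitative bias analysis or other approaches" can look like in practice, here is a minimal probabilistic sketch. It assumes a hypothetical validation result of 75 confirmed outcomes among 100 reviewed charts and, for simplicity, near-complete case capture (sensitivity ≈ 1), so only PPV uncertainty is propagated into the corrected risk; none of the numbers come from the cited studies.

    # Minimal probabilistic bias analysis: propagate validation
    # uncertainty in PPV into the outcome risk. Numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)

    obs_cases, cohort_n = 200, 10_000   # algorithm-identified cases
    confirmed, reviewed = 75, 100       # invented chart-review results

    # Beta(confirmed + 1, refuted + 1): PPV uncertainty under a uniform prior.
    ppv_draws = rng.beta(confirmed + 1, reviewed - confirmed + 1, size=100_000)

    corrected_risk = obs_cases * ppv_draws / cohort_n
    lo, mid, hi = np.percentile(corrected_risk, [2.5, 50, 97.5])
    print(f"Corrected risk: {mid:.4f} (95% simulation interval {lo:.4f}-{hi:.4f})")

A fuller analysis would also sample sensitivity and specificity from their validation data, but the structure stays the same: draw bias parameters, correct the estimate, and summarize the distribution of corrected results.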