2021
DOI: 10.3346/jkms.2021.36.e198

Machine Learning Approach for Active Vaccine Safety Monitoring

Abstract: Background: Vaccine safety surveillance is important because it is related to vaccine hesitancy, which affects vaccination rates. To increase confidence in vaccination, active monitoring of vaccine adverse events is important. For effective active surveillance, we developed and verified a machine learning-based active surveillance system using national claims data. Methods: We used two databases, one from the Korea Disease Control and Prevention Agency, which contains f…
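The truncated abstract does not show the system's actual pipeline; as a rough, hedged illustration of the kind of approach it describes, the sketch below fits a gradient-boosting classifier to synthetic claims-style features to flag potential post-vaccination adverse events. The data, feature names, and model choice are all assumptions made for illustration, not the authors' method.

```python
# Minimal sketch of a claims-based ML screen for vaccine adverse events.
# The synthetic data, feature names, and model choice are illustrative
# assumptions -- not the pipeline the paper actually describes.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-person claims features in a post-vaccination window.
X = np.column_stack([
    rng.integers(0, 90, n),   # days since vaccination
    rng.integers(0, 15, n),   # outpatient visits in window
    rng.integers(0, 3, n),    # emergency-department visits
    rng.integers(18, 90, n),  # age in years
])

# Synthetic label: 1 = adverse-event diagnosis code observed in window.
logit = 1.0 * X[:, 2] + 0.02 * (X[:, 3] - 50) - 3.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```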

Cited by 6 publications (2 citation statements; 0 supporting, 2 mentioning, 0 contrasting)
References 31 publications
“…In terms of PV, explainable AI models on RWD could bring evidence about unknown confounders in an ADR and they could provide more informative results for PV experts about the causes of a potential PV signal. Although XAI methods' results are tested extensively in the healthcare domain, we found only 6 recent studies that were applied in the PV domain [16,18,22,23,49,51] (2 in 2021, 2 in 2022, and 2 in 2023). Another novel approach that is discussed extensively in the explainability field is the newly introduced causal machine/deep learning (CML/CDL).…”
Section: Artificial Intelligence
Citation type: mentioning (confidence: 99%)
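As one concrete, heavily hedged form of the XAI-for-PV idea in the statement above, the sketch below uses the third-party `shap` package to rank the feature contributions of a toy adverse-event classifier. The data, feature names, and model are synthetic placeholders, not any of the cited systems.

```python
# Hedged sketch: attributing a tree model's ADR-signal predictions to
# input features with SHAP. Everything here is a synthetic placeholder,
# not a validated pharmacovigilance system.
import numpy as np
import shap  # third-party: pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["days_since_vax", "ed_visits", "age"]  # hypothetical
X = rng.random((1000, 3))
y = (X[:, 1] + 0.3 * X[:, 2] + 0.2 * rng.random(1000) > 0.9).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Mean |SHAP| per feature gives a global ranking a PV reviewer could
# inspect for plausibility (e.g., is an apparent signal driven by age
# rather than by the exposure-related features?).
for name, imp in sorted(zip(feature_names, np.abs(shap_values).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```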
“…1,2 Studies already point out the importance of computational toxicology as a tool for the future of environmental health sciences and regulatory decisions in public health. 3,4 In fact, this tool combined with machine learning has a range of applications in many areas of science such as pharmacology, 5-10 genetics and biochemistry, 11-15 and drug discovery for COVID-19. 16 However, model explainability in machine learning is an essential issue 17-21 because machine learning models are mostly considered black boxes, 17,21-24 indicating an ambitious challenge to the progress of machine learning.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)