2023
DOI: 10.1038/s41598-023-44897-8
Application of multiple-finding segmentation utilizing Mask R-CNN-based deep learning in a rat model of drug-induced liver injury

Eun Bok Baek,
Jaeku Lee,
Ji-Hee Hwang
et al.

Abstract: Drug-induced liver injury (DILI) presents significant diagnostic challenges, and artificial intelligence-based deep learning technology has recently been used to predict various hepatic findings. In this study, we trained a set of Mask R-CNN-based deep learning algorithms to learn and quantify typical toxicant-induced histopathological lesions, the portal area, and connective tissue in Sprague Dawley rats. We compared a set of single-finding models (SFMs) and a combined multiple-finding model (MFM) for their ability to sim…
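The abstract describes models that segment and then quantify lesion findings in tissue sections. A minimal sketch of the quantification step, computing the area fraction each finding class covers from per-instance binary masks. All names, mask shapes, and finding labels here are illustrative assumptions, not taken from the paper's actual pipeline:

```python
from collections import defaultdict

def quantify_findings(masks, labels, image_area):
    """Compute the area fraction covered by each finding class.

    masks: list of binary instance masks (2-D lists of 0/1),
           e.g. as produced by an instance-segmentation model
    labels: hypothetical finding name for each instance
            (e.g. "necrosis", "portal_area")
    image_area: total number of pixels in the section image
    """
    area_by_finding = defaultdict(int)
    for mask, label in zip(masks, labels):
        # Sum the 1-pixels of this instance into its finding class
        area_by_finding[label] += sum(sum(row) for row in mask)
    return {finding: area / image_area
            for finding, area in area_by_finding.items()}

# Two toy 2x2 instance masks of the same finding on a 16-pixel image
masks = [[[1, 1], [1, 0]], [[0, 1], [1, 1]]]
labels = ["necrosis", "necrosis"]
print(quantify_findings(masks, labels, 16))  # {'necrosis': 0.375}
```

Aggregating instance areas per class like this is one plausible way to turn segmentation output into the quantitative lesion measures the study compares between SFMs and the MFM.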

Cited by 5 publications (1 citation statement)
References 30 publications
“…The interpretability and explainability of artificial intelligence (AI) models are critical in the medical arena since healthcare practitioners demand insights into the model’s decision-making process 32,33 . Deep learning models, particularly neural networks, have been criticized for their “black-box” nature, which makes it difficult to grasp the logic behind the predictions made by these approaches 34,35,36,37,38,39,40 . This study intends to overcome these important issues by proposing reliable, explainable, and thus more transparent methods for exploring cutting-edge deep-learning techniques for medical research and practice.…”
Section: Introduction
confidence: 99%