2021 · Preprint
DOI: 10.1101/2021.10.12.464160

DeepImageTranslator V2: analysis of multimodal medical images using semantic segmentation maps generated through deep learning

Abstract: Introduction: Analysis of multimodal medical images often requires the selection of one or more anatomical regions of interest (ROIs) for extraction of useful statistics. This task can prove laborious when a manual approach is used. We have previously developed a user-friendly software tool for image-to-image translation using deep learning. Therefore, we present herein an update to the DeepImageTranslator software with the addition of a tool for multimodal medical image segmentation analysis (hereby referred…
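The ROI-statistics step the abstract describes can be illustrated with a short sketch. This is a minimal example assuming the segmentation map is an integer label image aligned pixel-for-pixel with the source scan; the function name `roi_statistics` and the particular statistics shown are illustrative, not part of the DeepImageTranslator API.

```python
import numpy as np

def roi_statistics(image, seg_map, label):
    """Summary statistics for one segmented ROI.

    image   -- 2-D array of scan intensities (e.g., CT numbers)
    seg_map -- integer label map of the same shape, one label per class
    label   -- the class whose pixels define the ROI
    """
    mask = seg_map == label          # boolean ROI mask
    if not mask.any():
        raise ValueError(f"label {label} not present in segmentation map")
    roi = image[mask]                # intensities inside the ROI
    return {
        "area_px": int(mask.sum()),  # ROI size in pixels
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "min": float(roi.min()),
        "max": float(roi.max()),
    }
```

For a CT slice, for example, `roi_statistics(ct_slice, seg, label=2)` would return the size and mean attenuation of whichever tissue class was assigned label 2.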

Cited by 12 publications (11 citation statements)
References 24 publications
“…In the field of radiology, deep learning models have been used to analyze medical images such as X-rays, CT scans, and MRIs for the diagnosis and treatment of various conditions [23–25]. ViTs have also been explored for use in natural language processing tasks, such as extracting information from electronic health records and detecting adverse drug events [26–28].…”
Section: Discussion (mentioning)
confidence: 99%
“…Since the DDSM mammograms are in LJPEG format, the Stanford PVRG JPEG codec v1.1 was employed to read DDSM images and convert them into 16-bit grayscale PNG images (13). CBIS-DDSM and EMBED images are in DICOM format and were converted into 16-bit grayscale PNG files (22–25). All images were rescaled to 800 × 600 with bicubic interpolation and anti-aliasing so that they fit into 8 GB of Graphics Processing Unit (GPU) memory, for improved reproducibility.…”
Section: Data Preprocessing and Methods (mentioning)
confidence: 99%
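The conversion and rescaling pipeline described in the quote above can be sketched as follows, assuming pydicom, scikit-image, and imageio are available. The 16-bit normalization strategy and the row/column order of the 800 × 600 target are assumptions not specified in the cited paper, and the file names are hypothetical.

```python
import numpy as np
import pydicom
from skimage.transform import resize
import imageio.v3 as iio

def dicom_to_png16(dicom_path, png_path, size=(600, 800)):
    """Convert one DICOM image to a 16-bit grayscale PNG,
    rescaled with bicubic interpolation and anti-aliasing.

    size is (rows, cols); (600, 800) assumes 800 x 600 means
    width x height, which the cited paper does not state.
    """
    ds = pydicom.dcmread(dicom_path)
    img = ds.pixel_array.astype(np.float64)

    # Rescale to the target size; order=3 selects bicubic interpolation.
    img = resize(img, size, order=3, anti_aliasing=True, preserve_range=True)

    # Map intensities onto the full 16-bit range (assumed strategy;
    # the paper does not describe how values were mapped to 16 bits).
    lo, hi = img.min(), img.max()
    img = (img - lo) / max(hi - lo, 1e-12) * 65535.0

    iio.imwrite(png_path, img.astype(np.uint16))

dicom_to_png16("case_0001.dcm", "case_0001.png")
```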
“…Moreover, the work by Saeed Roshani et al. [8] implies that sensor technology could be a potent tool for diagnosing early-stage breast cancer. En Zhou Ye et al. [9] found that analysis of multimodal medical images using deep-learning-generated semantic segmentation maps could aid in diagnosing malignant breast tissue. Although mammography is considered the most efficacious diagnostic tool for breast cancer, it is not devoid of risks such as false positives, radiation exposure, and discomfort associated with the procedure.…”
Section: Introduction (mentioning)
confidence: 99%