2018
DOI: 10.1088/1361-6560/aaf44b

Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network

Abstract: Automatic tumor segmentation from medical images is an important step for computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, leading to state-of-the-art performance. However, most existing deep learning segmentation methods work for only a single imaging modality. PET/CT scanners are nowadays widely used in the clinic and are able to provide both metabolic information and anatomical information by integrating PET and CT into the same utility…
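To illustrate the multi-modality idea the abstract describes (this is not the authors' actual network, whose details are truncated above), the sketch below shows a toy fully convolutional segmenter that fuses co-registered PET and CT slices by stacking them as input channels; the class name TwoModalityFCN, the layer sizes, and the early-fusion design are all assumptions made for illustration only.

import torch
import torch.nn as nn

class TwoModalityFCN(nn.Module):
    """Toy early-fusion FCN: PET and CT slices enter as two input channels."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Small convolutional encoder over the fused 2-channel input.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution yields a per-pixel class score map (fully convolutional head).
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        x = torch.cat([pet, ct], dim=1)    # (B, 2, H, W): channel-wise fusion
        return self.head(self.encoder(x))  # (B, n_classes, H, W)

# Usage example with one random 128x128 PET/CT slice pair.
logits = TwoModalityFCN()(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])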

Cited by 152 publications (89 citation statements)
References 46 publications
“…for the use of MATV as prognostic factor in lymphoma patients or to use metabolic information to measure treatment response [17]. Furthermore, the strategies can, for example, also be used for the fast generation of reliable training sets for Convolutional Neural Networks (CNN) which are used more and more frequently for segmentation tasks [18][19][20]. The aim of this study was to investigate the potential improvements in the inter-observer variability of tumor segmentation results using these new workflows compared with more standard segmentation approaches, while allowing for the generation of plausible and reliable segmentations.…”
Section: PLOS ONE · mentioning · confidence: 99%
“…Automatic nasopharyngeal carcinoma tumor segmentation from ¹⁸F-FDG PET/CT scans using a U-Net architecture proved the feasibility of this task (Dice coefficient = 0.87) using AI-based algorithms (Zhao et al 2019a). Similar studies on head and neck (Huang et al 2018), as well as lung cancers (Zhao et al 2018) exhibited promising results using convolutional neural networks for automated tumor segmentation from PET/CT images. Nevertheless, fully automated lesion delineation from PET, CT, and MR images or any combination of these images still remains a major challenge owing to the large variability of lesion shape and uptake associated with various malignant diseases.…”
Section: Image Segmentation · mentioning · confidence: 72%
“…Zhao et al showed, for a small group of 30 patients, that the automatic segmentation of such tumors on ¹⁸F-FDG PET/CT data was, in principle, possible using the U-Net architecture (mean Dice score of 87.47%) (44). Other groups applied similar approaches to head and neck cancer (45) and lung cancer (46,47). Still, fully automated tumor segmentation remains a challenge, probably because of the extremely diverse appearance of these diseases.…”
Section: Discussion · mentioning · confidence: 99%
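For reference, the Dice scores quoted above (0.87 and 87.47%) measure the volume overlap between a predicted mask and a reference mask. The short sketch below computes it for two binary masks; the function name and the toy masks are illustrative and not taken from any of the cited papers.

import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping square "tumor" masks.
pred = np.zeros((64, 64), dtype=np.uint8)
pred[20:40, 20:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8)
truth[22:42, 22:42] = 1
print(f"Dice: {dice_coefficient(pred, truth):.3f}")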