2015
DOI: 10.1007/978-3-319-24571-3_7

Leveraging Mid-Level Semantic Boundary Cues for Automated Lymph Node Detection

Abstract: Histograms of oriented gradients (HOG) are widely employed image descriptors in modern computer-aided diagnosis systems. Built upon a set of local, robust statistics of low-level image gradients, HOG features are usually computed on raw intensity images. In this paper, we explore a learned image transformation scheme for producing higher-level inputs to HOG. Leveraging semantic object boundary cues, our methods compute data-driven image feature maps via a supervised boundary detector. Compared with t…

Cited by 30 publications (15 citation statements); references 15 publications.
“…Many of our CNN models achieve notably better (FROC-AUC and TPR/3FP) results than the previous state-of-the-art models [36] for mediastinal LN detection: GoogLeNet-RI-L obtains AUC = 0.95 and 0.85 TPR/3FP, versus AUC = 0.92 and 0.70 TPR/3FP [22] and 0.78 TPR/3FP [36], which uses stacked shallow learning. This difference lies in the fact that annotated lymph node segmentation masks are required to learn a mid-level semantic boundary detector [36], whereas CNN approaches only need LN locations for training [22]. In abdominal LN detection, [22] obtains the best trade-off between its CNN model complexity and sampled data configuration.…”
Section: Evaluations and Discussion (mentioning)
confidence: 85%
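
The FROC-AUC and TPR/3FP figures quoted above come from free-response ROC analysis: sensitivity is plotted against the average number of false positives per patient scan, and TPR/3FP is the sensitivity reached at 3 false positives per scan. Below is a minimal Python sketch of that computation; the function and parameter names are illustrative, and it assumes each candidate has already been matched against the ground-truth lymph nodes (it does not deduplicate multiple hits on the same node, as a strict per-lesion FROC would).

import numpy as np

def froc_points(scores, labels, n_scans):
    """FROC operating points: sensitivity vs. average false positives per scan.

    scores  : detector confidence for each LN candidate
    labels  : 1 if the candidate matches a ground-truth lymph node, else 0
    n_scans : number of patient scans the candidates came from
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)            # rank candidates by descending score
    labels = labels[order]
    tp = np.cumsum(labels)                 # true positives kept at each threshold
    fp = np.cumsum(1 - labels)             # false positives kept at each threshold
    sensitivity = tp / max(labels.sum(), 1)
    fp_per_scan = fp / n_scans
    return fp_per_scan, sensitivity

def sensitivity_at_fp(fp_per_scan, sensitivity, fp_rate=3.0):
    """Sensitivity at a fixed false-positive rate per scan, e.g. TPR/3FP."""
    mask = fp_per_scan <= fp_rate
    return float(sensitivity[mask].max()) if mask.any() else 0.0

An FROC-AUC can then be approximated as the (normalized) area under the resulting curve, e.g. np.trapz(sensitivity, fp_per_scan) over the false-positive range of interest; the exact normalization used in the cited papers is not restated here.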
“…To facilitate comparison, we adopt the data preparation protocol of [22], where positive and negative LN candidates are sampled with fields-of-view (FOVs) of 30 mm to 45 mm surrounding the annotated and detected LN centers (obtained by a candidate generation process). More precisely, [22], [41], [36] follow a coarse-to-fine CADe scheme, partially inspired by [42], which operates with ~100% detection recall at the cost of approximately 40 false or negative LN candidates per patient scan. In this work, positive and negative LN candidates are first sampled up to 200 times with translations and rotations.…”
Section: Datasets and Related Work (mentioning)
confidence: 99%
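
The candidate augmentation described above, where each positive and negative LN candidate is sampled up to 200 times with translations and rotations, can be sketched as follows. This is a simplified illustration rather than the cited pipeline: it crops fixed-size axial 2-D patches in voxel units, whereas the cited works resample physical FOVs of 30 mm to 45 mm, and all parameter names are hypothetical.

import numpy as np
from scipy.ndimage import rotate

def sample_candidate_patches(volume, center, n_samples=200,
                             max_shift_vox=3, patch_size=32, rng=None):
    """Augment one LN candidate with random translations and in-plane rotations.

    volume : 3-D CT array indexed (z, y, x); center: candidate centre in voxels.
    Assumes the candidate lies far enough from the volume border for cropping.
    """
    rng = rng or np.random.default_rng()
    half = patch_size // 2
    patches = []
    for _ in range(n_samples):
        # random translation of the candidate centre, in voxels
        dz, dy, dx = rng.integers(-max_shift_vox, max_shift_vox + 1, size=3)
        z = int(center[0] + dz)
        y = int(center[1] + dy)
        x = int(center[2] + dx)
        patch = volume[z, y - half:y + half, x - half:x + half]  # axial 2-D crop
        # random in-plane rotation of the cropped patch
        angle = float(rng.uniform(0.0, 360.0))
        patches.append(rotate(patch, angle, reshape=False, order=1))
    return np.stack(patches)   # shape: (n_samples, patch_size, patch_size)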
“…Recently, deep learning techniques have been introduced to the medical image analysis domain with promising results on various applications, such as computerized prognosis of Alzheimer's disease and mild cognitive impairment [32], organ segmentation [33] and detection [34], and ultrasound standard plane selection [35], on 3D or 4D image data. In the context of CAD, most works have focused on the problem of abnormality detection (CADe) [36], [37], [38]. For the problem of CADx, a specific convolutional neural network model, OverFeat [39], was employed in [40] to classify the specific type of peri-fissural nodules in an ensemble fashion, with AUC performance around 0.86.…”
Section: Deep Learning for CADx (mentioning)
confidence: 99%
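
The AUC of around 0.86 quoted above is the area under the standard ROC curve for the binary nodule-type classification. A minimal sketch with scikit-learn, using made-up labels and scores purely for illustration:

from sklearn.metrics import roc_auc_score

# Illustrative, fabricated-for-demo labels (1 = target nodule type) and classifier scores.
y_true  = [0, 1, 1, 0, 1, 0, 1, 0]
y_score = [0.20, 0.90, 0.65, 0.30, 0.80, 0.45, 0.55, 0.10]
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")   # area under the ROC curve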
“…In general, the RECIST guidelines are used to evaluate lymph nodes in such patients [52]. A number of investigators have developed automated software for detection and measurement of abdominal adenopathy [53–61] (Fig. 2).…”
Section: Organs (mentioning)
confidence: 99%