2019
DOI: 10.1007/978-3-030-32245-8_8

Extreme Points Derived Confidence Map as a Cue for Class-Agnostic Interactive Segmentation Using Deep Neural Network

Abstract: To automate the process of segmenting an anatomy of interest, we can learn a model from previously annotated data. The learning-based approach uses annotations to train a model that tries to emulate the expert labeling on a new data set. While tremendous progress has been made using such approaches, labeling of medical images remains a time-consuming and expensive task. In this paper, we evaluate the utility of extreme points in learning to segment. Specifically, we propose a novel approach to compute a confide…
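
The abstract hinges on turning user-clicked extreme points into a confidence map that guides segmentation. The sketch below is only a hedged illustration of one way such a map could be built, placing Gaussian kernels at the four extreme points; the function name, the `sigma` value, and the Gaussian formulation are assumptions for illustration, not the paper's actual construction.

```python
# Hypothetical sketch: turn four clicked extreme points (left-, right-, top-,
# bottom-most) into a soft confidence map. The paper's exact formulation may
# differ; Gaussian kernels at the clicks are one common encoding.
import numpy as np

def confidence_map_from_extreme_points(shape, points, sigma=10.0):
    """shape: (H, W) of the image; points: list of (row, col) extreme points."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    conf = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        bump = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        conf = np.maximum(conf, bump)   # keep the strongest evidence per pixel
    return conf                         # values in [0, 1], peaking at the clicks

# Example: four extreme points on a 128 x 128 slice
cmap = confidence_map_from_extreme_points((128, 128),
                                          [(64, 10), (64, 118), (10, 64), (118, 64)])
```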

Cited by 16 publications (8 citation statements) · References 11 publications
“…In Sakinis et al [23], the authors use the clicks as Gaussian kernels and put them in a separate input channel to an FCN to model user interactions via seed-point placing. Khan et al [24] extend the Gaussian kernel idea to a confidence map derived from extreme points that quantitatively encodes some priors. Majumder and Yao [25] transform the positive and negative clicks into images based on superpixel and object proposals, so that image information can be used with the clicks to generate a guidance map.…”
Section: Interactive Segmentation
confidence: 99%
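
The excerpt above describes feeding a click-derived Gaussian guidance map to an FCN through a separate input channel. A minimal sketch of that input arrangement, assuming a grayscale image, a single guidance channel, and a toy two-layer network standing in for the cited FCNs (none of these choices come from the cited papers):

```python
# Toy illustration of the extra-input-channel idea: the guidance map (Gaussian
# clicks or an extreme-point confidence map) is concatenated with the image
# before being passed to a fully convolutional network.
import torch
import torch.nn as nn

fcn = nn.Sequential(                              # placeholder 2-layer "FCN"
    nn.Conv2d(2, 16, kernel_size=3, padding=1),   # 2 input channels: image + guidance
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),              # per-pixel foreground logit
)

image = torch.rand(1, 1, 128, 128)                # grayscale slice
guidance = torch.rand(1, 1, 128, 128)             # e.g. the confidence map from above
logits = fcn(torch.cat([image, guidance], dim=1)) # shape: (1, 1, 128, 128)
```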
“…Interactive segmentation allows the user to make edits via clicks (Xu et al 2016), scribbles (Grady et al 2005), bounding boxes (Rajchl et al 2016; Castrejon et al 2017), or extreme points (Maninis et al 2018; Khan et al 2019). Interactive segmentation models need to be pre-trained using a held-out labeled dataset to make initial predictions before user interactions.…”
Section: Related Work
confidence: 99%
“…In Sakinis et al (2019), the authors utilize the clicks as Gaussian kernels and put them in a separate input channel to an FCN to model user interactions via seed-point placing. Khan et al (2019) extend the Gaussian kernel idea to a confidence map derived from extreme points that quantitatively encodes some priors. Majumder and Yao (2019) transform the positive and negative clicks into images based on superpixel and object proposals, so that image information can be utilized with the clicks to generate a guidance map.…”
Section: Interactive Segmentation
confidence: 99%