2020
DOI: 10.1016/j.neunet.2020.07.011

Discretely-constrained deep network for weakly supervised segmentation

Cited by 37 publications (17 citation statements)
References 25 publications
“…The incorporation of prior knowledge into deep learning methods is attracting increasing attention as a way to improve the performance of deep learning models. Several forms of prior knowledge, acting as constraint terms, have been proposed for integration into loss functions [44], [45]. Incorporating appropriate prior knowledge often brings a performance improvement for models.…”
Section: Discussion
confidence: 99%
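To make the idea in this excerpt concrete, here is a minimal sketch (PyTorch) of a training loss augmented with a prior-knowledge constraint term. The `prior_penalty` callable and all names are hypothetical illustrations, not the cited papers' code:

```python
import torch
import torch.nn.functional as F

def constrained_loss(logits, labels, prior_penalty, lam=0.1):
    """Cross-entropy plus a weighted prior-knowledge penalty term.

    logits: (N, C, H, W) network outputs; labels: (N, H, W) targets.
    prior_penalty: callable mapping softmax probabilities to a scalar
    measuring violation of the prior (hypothetical interface).
    lam: trade-off weight between data term and prior term.
    """
    ce = F.cross_entropy(logits, labels)
    probs = torch.softmax(logits, dim=1)
    return ce + lam * prior_penalty(probs)
```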
“…Nevertheless, their formulation is limited to linear constraints. More recently, inequality constraints have been tackled by augmenting the learning objective with a penalty-based function, e.g., an L2 penalty, which can be imposed within a continuous optimization framework [5,18,19] or in the discrete domain [28]. Although these methods have demonstrated remarkable performance in weakly supervised segmentation, they require that prior knowledge, exact or approximate, be given.…”
Section: Related Work
confidence: 99%
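A minimal sketch of the kind of L2 penalty for inequality constraints this excerpt describes, assuming the prior is given as bounds [lower, upper] on the soft size of the predicted foreground (the sum of its probabilities); names are illustrative, not the cited authors' implementation:

```python
import torch

def size_penalty(probs_fg, lower, upper):
    """Quadratic (L2) penalty for the inequality constraint
    lower <= size(S) <= upper on the soft foreground size.
    Zero inside the feasible interval, quadratic outside it.

    probs_fg: (N, H, W) foreground probabilities per image.
    """
    size = probs_fg.sum(dim=(1, 2))           # soft size per image, shape (N,)
    below = torch.clamp(lower - size, min=0)  # violation of the lower bound
    above = torch.clamp(size - upper, min=0)  # violation of the upper bound
    return (below ** 2 + above ** 2).mean()
```

This penalty is differentiable almost everywhere, which is what lets it be minimized within the continuous (gradient-based) frameworks the excerpt mentions; the discrete-domain alternative [28] instead enforces such bounds during a discrete optimization step.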
“…Additionally, following the literature in weakly supervised segmentation, we can obtain the corresponding image class activation maps (CAMs) from samples in D, resulting in the corresponding set, where each element represents the max-normalized CAM of the i-th sample and modality m_k.…”
Section: Let Us Denote a Set of N Training Images As
confidence: 99%
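The max-normalization mentioned here is a simple rescaling of each CAM to [0, 1]; a minimal sketch under that assumption (function name hypothetical):

```python
import torch

def max_normalize_cam(cam, eps=1e-8):
    """Max-normalize a CAM so its values lie in [0, 1].

    cam: (H, W) activation map for one sample and one modality.
    """
    cam = torch.relu(cam)          # keep positive class evidence only
    return cam / (cam.max() + eps) # divide by the peak; eps avoids 0/0
```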
“…[6], bounding boxes [7], [8], or global target information, such as object size [9]-[12]. A common strategy is to use image-level labels to derive pixel-wise class activation maps (CAMs) [13], which serve to identify object regions in the image.…”
Section: Introduction
confidence: 99%
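For reference, deriving a CAM from an image-level classifier typically follows the standard construction: the last convolutional feature maps are weighted by the classifier weights of the target class. A minimal sketch assuming a global-average-pooling classifier (all names hypothetical):

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """Standard CAM: channel-weighted sum of the last conv features.

    features:  (C, H, W) activations of the final conv layer.
    fc_weight: (num_classes, C) weights of the GAP classifier head.
    class_idx: index of the class whose activation map is wanted.
    """
    w = fc_weight[class_idx]                      # per-channel weights, (C,)
    cam = torch.einsum('c,chw->hw', w, features)  # weighted sum over channels
    return torch.relu(cam)                        # keep positive evidence
```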