2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00172
Attentive Feedback Network for Boundary-Aware Salient Object Detection

Cited by 443 publications (284 citation statements)
References 19 publications
“…Recent deep-learning SOD models (MINet [161], SACNet [187], GateNet [166], [193], LDF [148], DSRNet [164], EGNet [199], PoolNet [183], AFNet [177], MLMS [146], PAGE [44], CPD [173], BDPM [159], JDF [186], RAS [160], PAGR [180], C2S-Net [209], PiCANet [181], DSS [167], UCF [203], MSRNet [157], ILS [174], NLDF [15], AMULet [171], SCRN [162], BANet [194], BASNet [184], CapSal [147], DGRL [182], SRM [205]) are quantitatively evaluated using four evaluation metrics on five SOD datasets (DUTS-TE [174], DUT-OMRON [110], HKU-IS [154], ECSSD [103], Pascal-S [158]). The evaluation metrics used are maximum F-measure [14], S-measure [224], E-measure [225], and mean absolute error (MAE) [106].…”
Section: Datasets, Evaluation, and Discussion
confidence: 99%
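The metrics named in the citation above are standard in the SOD literature. As a rough illustration of two of them, here is a minimal sketch of MAE and maximum F-measure for a predicted saliency map against a binary ground truth; the threshold sweep and the β² = 0.3 convention follow common practice, and the function names are illustrative, not from any cited paper.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    binary ground truth, both scaled to [0, 1]."""
    return np.abs(pred.astype(float) - gt.astype(float)).mean()

def max_f_measure(pred, gt, beta2=0.3, steps=255):
    """Maximum F-measure: sweep binarization thresholds over the
    prediction and keep the best F-beta score (beta^2 = 0.3 is the
    usual convention in the SOD literature)."""
    gt = gt.astype(bool)
    best = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        binary = pred >= t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / max(binary.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        if precision + recall > 0:
            f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
            best = max(best, f)
    return best
```

A perfect prediction yields MAE = 0 and maximum F-measure = 1; lower MAE and higher F-measure are better.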
“…In Reference [177], Feng et al. performed two consecutive refinements of saliency features at every scale using the attentive feedback modules (AFM). First, an initial coarse saliency map is computed that is fairly rich in both spatial detail and semantics (Figure 5e).…”
Section: Deep Learning-Based Salient Object Detection
confidence: 99%
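The two consecutive refinements described in the citation above can be sketched in a highly simplified form. Everything below is illustrative, not the authors' implementation: a per-scale feature is gated by the current saliency estimate, a toy head re-predicts the map, and the second pass consumes the first pass's output.

```python
import numpy as np

def afm_refine(feature, saliency, blend=0.5):
    """One illustrative feedback refinement pass: modulate the per-scale
    feature with the current saliency estimate, then re-predict the map.
    (Hypothetical simplification of the attentive-feedback idea; the real
    AFM uses learned convolutions, not this toy mean/sigmoid head.)"""
    attended = feature * (blend + (1 - blend) * saliency)   # feedback gating
    return 1.0 / (1.0 + np.exp(-attended.mean(axis=0)))     # toy prediction head

# Two consecutive refinements at one scale, as the citation describes.
rng = np.random.default_rng(0)
feature = rng.random((8, 16, 16))        # C x H x W feature at one scale
coarse = np.full((16, 16), 0.5)          # initial coarse saliency map
refined_once = afm_refine(feature, coarse)
refined_twice = afm_refine(feature, refined_once)
```

The point of the two-pass structure is that the second refinement sees a saliency estimate already sharpened by the first, rather than the raw coarse map.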
“…We compare our method with 16 previous state-of-the-art methods, namely MDF [28], RFCN [18], UCF [20], Amulet [13], NLDF [12], DSS [31], BMPM [21], PAGR [50], PiCANet [51], SRM [16], DGRL [32], MLMS [52], AFNet [53], CapSal [54], BASNet [15], and CPD [55]. For a fair comparison, we use the saliency maps provided by the authors.…”
Section: Comparison With the State-of-the-Art
confidence: 99%
“…We compare the proposed saliency detection method against 18 previous state-of-the-art methods, namely MDF [13], RFCN [31], DHS [32], UCF [46], Amulet [34], NLDF [47], DSS [48], RAS [49], BMPM [33], PAGR [50], PiCANet [51], SRM [18], DGRL [17], MLMS [52], AFNet [53], CapSal [54], BASNet [55], and CPD [16]. We perform comparisons on five challenging datasets.…”
Section: Comparison With the State-of-the-Art
confidence: 99%