2018
DOI: 10.48550/arxiv.1802.07931
Preprint
Where's YOUR focus: Personalized Attention

Cited by 2 publications (5 citation statements)
References 0 publications
“…In practice, however, it faces difficulty balancing precision and recall due to the small regions of interest (ROI) found in medical images. (Code available at https://github.com/nabsabraham/focal-tversky-unet.) Research efforts to address small-ROI segmentation propose more discriminative models such as attention gated networks [5], [6]. CNNs with attention gates (AGs) focus on the target region with respect to the classification goal and can be trained end-to-end.…”
Section: Introduction
confidence: 99%
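The repository linked in the quote implements the focal Tversky loss for small-ROI segmentation. As a rough illustrative sketch only (not the repository's actual code), the binary form weights false negatives and false positives asymmetrically and then raises the complement of the Tversky index to a focal exponent; all parameter values below are conventional defaults assumed here, not taken from the cited work:

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation (illustrative sketch).

    alpha weights false negatives, beta weights false positives;
    gamma < 1 emphasizes hard examples such as small ROIs.
    """
    tp = np.sum(y_true * y_pred)            # true positives (soft)
    fn = np.sum(y_true * (1.0 - y_pred))    # false negatives (soft)
    fp = np.sum((1.0 - y_true) * y_pred)    # false positives (soft)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

With alpha > beta, missed foreground pixels are penalized more than spurious ones, which is the precision/recall trade-off the quote refers to.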
“…However, different people actually focus on different areas even when they gaze at the same scene; that is, individual differences exist [8][9][10]. To model individual visual attention, the personalization of the saliency map has been addressed over the past few years [11][12][13]. To distinguish a traditional saliency map from its personalized counterpart, we call them a universal saliency map (USM) and a personalized saliency map (PSM), respectively.…”
Section: Introduction
confidence: 99%
“…The gaze patterns emerging in images are quite complex and individually different, and these characteristics make PSM prediction difficult. To extract gaze patterns and tendencies, several researchers have collected eye-tracking data for thousands of images [11,12,14]. Moreover, in these studies, the simultaneous prediction of PSMs for several persons has been attempted using a multi-task convolutional neural network (multi-task CNN) [19] to compensate for the lack of data [12].…”
Section: Introduction
confidence: 99%
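The multi-task arrangement described in the quote — one shared trunk with a separate output branch per person — can be sketched loosely as follows. All shapes, names, and the 1x1-projection heads are illustrative assumptions, not the cited papers' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 persons, an 8-channel shared feature map of size 4x4.
n_persons, feat_dim, h, w = 3, 8, 4, 4

# Output of a shared encoder trunk (stand-in for real CNN features).
shared_features = rng.standard_normal((feat_dim, h, w))

# One person-specific 1x1 projection head per subject; only these weights differ.
person_heads = rng.standard_normal((n_persons, feat_dim))

# Each PSM is a person-specific linear combination of the shared channels,
# squashed to [0, 1] with a sigmoid to act as a saliency map.
psms = np.einsum('pc,chw->phw', person_heads, shared_features)
psms = 1.0 / (1.0 + np.exp(-psms))
```

Sharing the trunk lets all persons' eye-tracking data train the common features, which is how such a design compensates for the limited per-person data the quote mentions.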