2019
DOI: 10.1007/s11063-019-10027-7
Visual Sentiment Analysis by Combining Global and Local Information

Cited by 49 publications (32 citation statements)
References 25 publications
“…Deep learning is by all means contributing to making polarity detectors very reliable. A very interesting research direction is the development of models capable of locating the most important regions in an image, to exploit saliency to improve sentiment analysis [48, 54-58]. Up to now, multi-task learning has received little attention, although several applications exist that require solving multiple visual tasks.…”
Section: Concluding Remarks and Discussion
confidence: 99%
“…In [48], the Vgg_19 architecture processed an image including the conventional R, G, and B channels and a focal channel; the latter channel was set to model human attention. Likewise, the works in [54,55] studied the relevance of salience in sentiment detection. Attention mechanisms [56,57] were addressed in [58], where the authors combined VggNet architectures with a recurrent neural network (RNN).…”
Section: Polarity Detection
confidence: 99%
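The four-channel input described above for [48] can be illustrated with a brief sketch. The snippet below is a minimal, hypothetical reconstruction rather than the authors' exact model: it assumes a VGG-19 backbone whose first convolution is widened to accept an extra "focal"/attention channel alongside RGB, with the new channel's filters initialised from the mean of the RGB filters and a two-way polarity head.

```python
# Hypothetical sketch of a 4-channel (RGB + "focal"/attention map) VGG-19
# input, in the spirit of the description of [48]; the channel layout,
# initialisation, and 2-class head are assumptions, not the authors' model.
import torch
import torch.nn as nn
from torchvision.models import vgg19

def build_four_channel_vgg19(num_classes: int = 2) -> nn.Module:
    model = vgg19(weights=None)  # pretrained RGB weights could be loaded here

    # Swap the first conv layer (3 -> 64 channels) for a 4-channel version.
    old = model.features[0]
    new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                    stride=old.stride, padding=old.padding)
    with torch.no_grad():
        new.weight[:, :3] = old.weight                        # reuse RGB filters
        new.weight[:, 3:] = old.weight.mean(1, keepdim=True)  # init focal channel
        new.bias.copy_(old.bias)
    model.features[0] = new

    # Replace the 1000-way ImageNet head with a sentiment-polarity head.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model

# Usage: concatenate the RGB image with a single-channel attention map.
model = build_four_channel_vgg19()
rgb = torch.rand(1, 3, 224, 224)
focal = torch.rand(1, 1, 224, 224)        # stand-in for a human-attention map
logits = model(torch.cat([rgb, focal], dim=1))
```

Initialising the extra filters from the average of the RGB filters is one common way to keep pretrained behaviour roughly intact while letting the network learn how much weight to give the attention map.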
“…GM EI SI [27]: This work found that not all images in the dataset contain salient objects. Therefore, it argues that visual sentiment analysis should not focus only on local features.…”
Section: Methods
confidence: 99%
“…On the basis of NASNet, Yadav et al. applied a residual attention module to learn the emotion-related regions of the image [26]. Wu et al. suggested using an object detection module to decide whether to use a local module [27]. Yang et al. [28] proposed a method for finding related regions, using an off-the-shelf method to obtain object proposals as local emotional information, and used VGG to learn global information.…”
Section: Related Work
confidence: 99%
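The idea of deciding whether to invoke a local module, attributed above to Wu et al. [27], and the use of object proposals as local emotional information [28], can be sketched as a detection-gated fusion. The code below is an illustrative assumption, not any of the cited pipelines: the detector choice (Faster R-CNN), the confidence threshold, and the equal-weight averaging are all placeholders.

```python
# Illustrative detection-gated global/local fusion; detector, threshold, and
# equal-weight averaging are placeholder choices, not the cited methods.
import torch
import torchvision.transforms.functional as TF
from torchvision.models import vgg16
from torchvision.models.detection import fasterrcnn_resnet50_fpn

global_cnn = vgg16(weights=None).eval()   # stand-in global sentiment branch
local_cnn = vgg16(weights=None).eval()    # stand-in local sentiment branch
detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None).eval()

@torch.no_grad()
def predict_sentiment(image: torch.Tensor, score_thresh: float = 0.7) -> torch.Tensor:
    """image: (3, H, W) float tensor with values in [0, 1]."""
    global_logits = global_cnn(TF.resize(image, [224, 224]).unsqueeze(0))

    # Off-the-shelf object proposals decide whether a local branch is used.
    det = detector([image])[0]
    keep = det["scores"] > score_thresh
    if not keep.any():
        return global_logits              # no confident object: global only

    local_logits = []
    for box in det["boxes"][keep]:
        x1, y1, x2, y2 = box.round().int().tolist()
        if x2 <= x1 or y2 <= y1:          # skip degenerate boxes
            continue
        crop = TF.resize(image[:, y1:y2, x1:x2], [224, 224]).unsqueeze(0)
        local_logits.append(local_cnn(crop))
    if not local_logits:
        return global_logits

    local_logits = torch.cat(local_logits).mean(0, keepdim=True)
    return 0.5 * global_logits + 0.5 * local_logits   # simple late fusion
```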
“…The system classifies a given image using the eight-emotion scheme proposed by Mikels et al. [31]. The authors of [64] combined CNNs and saliency detection to develop a system that first predicts the sentiment of the whole image. Then, if salient regions are detected, it performs sentiment prediction on the sub-images that depict those regions.…”
Section: State-of-the-art
confidence: 99%
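The two-stage behaviour described for [64] (whole-image prediction first, then prediction on salient sub-images when a salient region is found) can be outlined as follows. This is a minimal sketch under assumed details: the saliency map is taken as given, the threshold and tight-bounding-box cropping are illustrative, and the equal-weight fusion of the two stages is not claimed to match the original system.

```python
# Minimal sketch of a whole-image-then-salient-crop pipeline; the saliency
# source, threshold, and averaging are assumptions, not the system of [64].
import numpy as np
import torch
import torchvision.transforms.functional as TF
from torchvision.models import vgg16

classifier = vgg16(weights=None).eval()   # stand-in sentiment classifier

@torch.no_grad()
def two_stage_sentiment(image: torch.Tensor, saliency: np.ndarray,
                        thresh: float = 0.5) -> torch.Tensor:
    """image: (3, H, W) in [0, 1]; saliency: (H, W) map in [0, 1]."""
    whole = classifier(TF.resize(image, [224, 224]).unsqueeze(0))

    mask = saliency > thresh
    if not mask.any():                    # no salient region: stage one only
        return whole

    # A tight bounding box around the salient pixels defines the sub-image.
    ys, xs = np.where(mask)
    crop = image[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    local = classifier(TF.resize(crop, [224, 224]).unsqueeze(0))

    return 0.5 * whole + 0.5 * local      # assumed fusion of the two stages
```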