2016
DOI: 10.1016/j.knosys.2016.05.022
Leveraging multimodal information for event summarization and concept-level sentiment analysis

Cited by 68 publications (11 citation statements)
References 9 publications
“…Comprehensive reviews on personalized tag recommendation and ranking can be found in a textbook [40]. In this way, multimodal analysis [32,33] using various types of features has improved the performance of applications such as recommendation and ranking [34,37,39,40] as well as basic theories including temporal segmentation [36] and event recognition [35,38].…”
Section: Related Work
confidence: 99%
“…This approach is compared with baseline algorithms that classify opinions as positive, negative, or neutral. Extracting noise while preserving the semantics and sentics of media content is a real and complex problem, tackled by leveraging a multimodal framework [7]. The proposed approach focuses on event summarization and concept-level sentiment analysis.…”
Section: Dictionary-Based Approach
confidence: 99%
“…Research on single-modal tasks includes, e.g., using deep networks for scene recognition [4], using deeper networks to achieve better performance in image classification [5], [6], and generalizing features extracted from a specific dataset in a fully supervised fashion to generic tasks [7]. Some works have investigated multiple modalities, e.g., event summarization leveraging both visual and textual information [8]. Recently, there has been some initial work on cross-modal correlation, such as predicting answers given an image and a question as input [9], analyzing pairwise correlation between images and their captions [10], and cross-modal retrieval by canonical correlation analysis (CCA) [11] or kernel CCA (KCCA) [12].…”
Section: Introduction
confidence: 99%