2016
DOI: 10.1109/tcsvt.2014.2347551

Effective Multimodality Fusion Framework for Cross-Media Topic Detection

Abstract: Due to the prevalence of "We-Media", everybody quickly publishes and receives information in various forms anywhere and anytime through the Internet. The rich cross-media information carried by the multi-modal data in multiple media has a wide audience, deeply reflects the social realities, and brings about a much greater social impact than any single-media information. Therefore, automatically detecting topics from cross-media data is of great benefit to organizations (e.g., advertising agencies, governme…

Cited by 35 publications (11 citation statements)
References 42 publications
“…Based on a Markov random field, this model learns a set of shared topics across the modalities. Chu et al. [12] developed a flexible multimodality graph (MMG) fusion framework to fuse the complex multi-modal data from different media, together with a topic recovery approach to effectively detect topics from cross-media data.…”
Section: Multi-modal and Cross-modal Retrieval (mentioning)
Confidence: 99%
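The MMG idea quoted above lends itself to a compact illustration. The sketch below blends per-modality similarity graphs into a single affinity matrix and recovers topics by clustering it; the cosine similarities, the fixed fusion weight alpha, and the spectral-clustering step are illustrative assumptions, not the actual formulation of Chu et al. [12].

```python
# Minimal sketch of multi-modal graph fusion for topic detection
# (illustrative; not the MMG formulation of Chu et al. [12]).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def fuse_modalities(text_feats, visual_feats, alpha=0.6):
    """Blend per-modality similarity graphs into one fused affinity matrix.

    text_feats, visual_feats: (n_docs, d) feature matrices describing the
    same n_docs cross-media documents; alpha weights the text modality.
    """
    w_text = cosine_similarity(text_feats)
    w_visual = cosine_similarity(visual_feats)
    return alpha * w_text + (1.0 - alpha) * w_visual

def detect_topics(fused, n_topics=10):
    # Cluster the fused affinity matrix; each cluster is read as one
    # cross-media topic. Affinities are clipped to be non-negative.
    model = SpectralClustering(n_clusters=n_topics, affinity="precomputed")
    return model.fit_predict(np.clip(fused, 0.0, None))
```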
“…In this section, we describe several works related to ours. Some works on the detection of content groups from different social media platforms [2,9,39,40,44,45] are strongly related to our strategy for solving problem 1 (see Section 1). The content groups detected in those works can provide complementary information delivered by multiple platforms, which is much richer than the information available from a single platform.…”
Section: Related Work (mentioning)
Confidence: 99%
“…For example, semantic relationships among multi-modal content from different social media platforms were mined by fusing two uni-modal graphs, i.e., a text graph and a visual graph, with upload-time similarities [9,45] and user-behavior information [39]. In another work, information on hot search queries was used as guidance to calculate similarities between the contents of different platforms [40].…”
Section: Related Work (mentioning)
Confidence: 99%
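As a rough illustration of the time-aware fusion this statement attributes to [9,45], the following sketch gates content similarity by upload-time proximity. The Gaussian kernel, its bandwidth, and the equal-weight combination are assumptions for illustration, not the exact schemes used in those papers.

```python
# Illustrative time-gated fusion of text and visual similarity graphs;
# the Gaussian kernel and its bandwidth are assumptions, not the exact
# schemes of [9,45].
import numpy as np

def upload_time_similarity(timestamps_hours, sigma_hours=24.0):
    """Gaussian kernel over pairwise upload-time gaps."""
    t = np.asarray(timestamps_hours, dtype=float)
    gaps = np.abs(t[:, None] - t[None, :])
    return np.exp(-(gaps ** 2) / (2.0 * sigma_hours ** 2))

def fuse_with_time(w_text, w_visual, timestamps_hours, alpha=0.5):
    # Content similarity is damped for items posted far apart in time,
    # since such items rarely belong to the same breaking topic.
    w_time = upload_time_similarity(timestamps_hours)
    return (alpha * w_text + (1.0 - alpha) * w_visual) * w_time
```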
“…Reference [4] proposed an unsupervised method, called the convolutional cross autoencoder, for cross-modality element-level feature learning, which can capture cross-modality correlations in element samples of social media datasets. Reference [5] proposed a multi-modality fusion framework and a topic recovery approach to effectively detect topics from cross-media data. Reference [6] proposed a modality-dependent cross-media retrieval (CMR) model, where two couples of projections are learned for different CMR tasks instead of a single couple of projections.…”
Section: Introduction (mentioning)
Confidence: 99%
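To make the modality-dependent idea attributed to [6] concrete, here is a hedged sketch that learns a separate projection per retrieval direction (image-to-text vs. text-to-image). It simplifies the paper's two couples of projections to one ridge-regression mapping per task; learn_projection, reg, and the cosine ranking are hypothetical choices, not the model from [6].

```python
# Hedged sketch of modality-dependent cross-media retrieval in the
# spirit of [6]: a separate projection per task (I2T vs. T2I) rather
# than one shared mapping. Ridge regression is an assumed simplification.
import numpy as np

def learn_projection(src, tgt, reg=1e-3):
    """Closed-form ridge solution mapping src features toward tgt features."""
    d = src.shape[1]
    return np.linalg.solve(src.T @ src + reg * np.eye(d), src.T @ tgt)

def train_modality_dependent(img_feats, txt_feats):
    # One projection per retrieval direction, trained on paired data.
    p_i2t = learn_projection(img_feats, txt_feats)  # image query -> text space
    p_t2i = learn_projection(txt_feats, img_feats)  # text query -> image space
    return p_i2t, p_t2i

def retrieve(query, gallery, projection, k=5):
    # Rank gallery items by cosine similarity to the projected query.
    q = query @ projection
    sims = gallery @ q / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]
```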