Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining 2018
DOI: 10.1145/3159652.3159691

User Profiling through Deep Multimodal Fusion

Cited by 96 publications (35 citation statements)
References 24 publications

“…For example, Tong et al. (2017) apply an outer-product fusion method to combine text and photo information for the task of detecting human trafficking. For the task of user profiling, formulated as a multi-task classification problem, Vijayaraghavan et al. (2017) propose a hierarchical attention model, and Farnadi et al. (2018) propose the UDMF framework, a hybrid integration model that combines early feature fusion and late decision fusion using both stacking and power-set combination. Zhong et al. (2016) also study the combination of images and captions for the task of detecting cyberbullying.…”
Section: Introduction (mentioning)
confidence: 99%
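As a rough illustration of the outer-product fusion mentioned in the excerpt above, the following is a minimal PyTorch sketch. The embedding sizes, class name, and linear classifier are illustrative assumptions, not the setup used by Tong et al. (2017) or by the UDMF framework.

```python
import torch
import torch.nn as nn

class OuterProductFusion(nn.Module):
    """Fuse a text vector and an image vector via their outer product.

    Dimensions are illustrative; the cited papers may use different sizes.
    """
    def __init__(self, text_dim=128, image_dim=128, num_classes=2):
        super().__init__()
        # The outer product yields a text_dim x image_dim interaction matrix,
        # which is flattened and fed to a linear classifier.
        self.classifier = nn.Linear(text_dim * image_dim, num_classes)

    def forward(self, text_vec, image_vec):
        # text_vec: (batch, text_dim), image_vec: (batch, image_dim)
        interaction = torch.bmm(text_vec.unsqueeze(2), image_vec.unsqueeze(1))
        return self.classifier(interaction.flatten(start_dim=1))

# Toy usage with random embeddings standing in for real text/image encoders.
model = OuterProductFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```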
“…Here we briefly review related works in this direction. There is a large body of work on integrating multimodal data sources in deep neural networks, including recommendation [7,11,44], multimodal retrieval [14,23], user profiling [12], and image captioning [10,19]. The flexibility of deep architectures allows multimodal fusion to be implemented as either feature-level fusion or decision-level fusion [31].…”
Section: Deep Multimodal Fusion (mentioning)
confidence: 99%
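The distinction between feature-level (early) and decision-level (late) fusion drawn in the excerpt above can be sketched as follows. This is a minimal, generic illustration with made-up dimensions and averaging as the late-fusion rule; it is not the architecture of any cited paper.

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Early fusion: concatenate per-modality features, then classify jointly."""
    def __init__(self, dims=(64, 64), num_classes=2):
        super().__init__()
        self.head = nn.Linear(sum(dims), num_classes)

    def forward(self, feats):
        return self.head(torch.cat(feats, dim=-1))

class DecisionLevelFusion(nn.Module):
    """Late fusion: classify each modality separately, then average the logits."""
    def __init__(self, dims=(64, 64), num_classes=2):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d, num_classes) for d in dims)

    def forward(self, feats):
        per_modality = [head(f) for head, f in zip(self.heads, feats)]
        return torch.stack(per_modality).mean(dim=0)

# Two toy modalities with 64-dimensional features each.
feats = [torch.randn(4, 64), torch.randn(4, 64)]
print(FeatureLevelFusion()(feats).shape, DecisionLevelFusion()(feats).shape)
```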
“…They use crowdsourcing to annotate user profiles and train log-linear models using lexical features. Farnadi et al. [4] merge multiple modalities of user data, such as text, images, and relations, to predict age, gender, and personality. They build a hybrid user-profiling framework that uses a shared representation between modalities to integrate multiple sources of data at both the feature level and the decision level.…”
Section: User Profiling (mentioning)
confidence: 99%
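To make the idea of a shared representation with multi-task prediction heads (age, gender, personality) concrete, here is a small sketch under assumed modality dimensions, a simple average in the shared space, and hypothetical head sizes; the actual UDMF framework described by Farnadi et al. [4] is more elaborate (stacking and power-set combination of decisions).

```python
import torch
import torch.nn as nn

class SharedRepUserProfiler(nn.Module):
    """Project each modality into a shared space, fuse, and predict several
    profile attributes with task-specific heads (multi-task learning)."""
    def __init__(self, modality_dims=(300, 512, 32), shared_dim=128):
        super().__init__()
        self.projections = nn.ModuleList(
            nn.Linear(d, shared_dim) for d in modality_dims)
        # Task-specific heads: age (regression), gender (binary), Big Five traits.
        self.age_head = nn.Linear(shared_dim, 1)
        self.gender_head = nn.Linear(shared_dim, 2)
        self.personality_head = nn.Linear(shared_dim, 5)

    def forward(self, text_feat, image_feat, relation_feat):
        shared = torch.stack([
            torch.relu(p(x)) for p, x in
            zip(self.projections, (text_feat, image_feat, relation_feat))
        ]).mean(dim=0)  # simple average in the shared space
        return (self.age_head(shared),
                self.gender_head(shared),
                self.personality_head(shared))

# Toy usage: text, image, and relation features for a batch of 4 users.
model = SharedRepUserProfiler()
age, gender, traits = model(torch.randn(4, 300), torch.randn(4, 512), torch.randn(4, 32))
```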