2019
DOI: 10.1016/j.eswa.2019.05.040
A fuzzy rule based multimodal framework for face sketch-to-photo retrieval

Cited by 22 publications (7 citation statements) · References 11 publications
“…[73] employs a generative representation method with extreme learning machines for cross-modal classification. [27] have fused the facial attributes of a person and the semantic color information using a fuzzy rule based layered classifier. [8] presents an Attribute-Image Hierarchical Matching model for text attribute description based person search without any query imagery.…”
Section: Related Work
confidence: 99%
“…Multi-modal information retrieval takes queries in one modality of data to retrieve relevant data from other modalities, augmenting information from a single source with information from other sources as in the cases of video-text matching [36,59], image-text matching [10], situational knowledge delivery [8,27,41], etc. The main challenge in cross-modal retrieval lies in the heterogeneity gap between different modalities [18,43,62].…”
Section: Introduction
confidence: 99%
“…Data fusion among multiple modalities has been employed in many application domains, such as sentiment analysis [17], image-text matching [14], face retrieval [8], and visual question answering for a better understanding of context. These approaches have performed well for their respective application domains, but they lack generalization capabilities.…”
Section: Cross-modal Matching and Correlation Learning
confidence: 99%
“…Data fusion among multiple modalities has been used in many application domains such as sentiment analysis [17], image-text matching [14], face retrieval [8], and visual question answering for a better understanding of context. These approaches have performed well for respective application domains, but they lack generalization capabilities.…”
Section: Related Work
confidence: 99%