2020
DOI: 10.1007/978-3-030-45442-5_20
Supervised Learning Methods for Diversification of Image Search Results

Abstract: We adopt a supervised learning framework, namely R-LTR [17], to diversify image search results, and extend it in various ways. Our experiments show that the adopted and proposed variants are superior to two well-known baselines, with relative gains up to 11.4%.

Cited by 2 publications (2 citation statements)
References 13 publications
“…Tables 5 and 6 provide the findings for the approaches based on the supervised R-LTR method (to facilitate comparisons, the results for art_xQuAD are repeated). Our results reveal that (a) our multidimensional R-LTR_exp approach (using the Adaptive instantiation) with explicit aspects outperforms the baseline R-LTR_imp (which is an implicit diversification method), (b) our implementation of the multidimensional R-LTR approach using a two-layer neural network (as in Goynuk and Altingovde [2020]) further improves performance (since R-LTR_expNN outperforms R-LTR_exp), and (c) multidimensional R-LTR_expNN yields the best performance only for the ST-Recall metric, while multidimensional art_xQuAD performs better for the remaining metrics.…”
Section: Experimental Evaluation
confidence: 99%
“…Finally, in Goynuk and Altingovde (2020), R-LTR was implemented using a neural network framework, which allows a nonlinear formulation and the training of more complex models (i.e., via multiple hidden layers). Similarly, we apply this approach for training a model based on the same input as Equation (…) (denoted by R-LTR_expNN).…”
Section: Diversification Across Dimensions
confidence: 99%
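The excerpt above describes replacing R-LTR's linear scoring function with a small neural network while keeping its greedy, sequential selection of results. The following is a minimal sketch of that idea, not the authors' implementation: the feature dimensions, the "min" dissimilarity aggregation, and the random (untrained) weights are all illustrative assumptions; in the actual method the network weights are learned from labeled diverse rankings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n candidate images, each with a d-dim relevance
# feature vector, plus a symmetric pairwise dissimilarity matrix.
n, d, hidden = 8, 5, 4
X = rng.random((n, d))      # relevance features x_i (illustrative)
D = rng.random((n, n))      # pairwise dissimilarities r_ij (illustrative)
D = (D + D.T) / 2

# Two-layer network weights. Random here for the sketch; R-LTR-style
# training (omitted) would fit these to labeled diverse rankings.
W1 = rng.normal(size=(d + 1, hidden))
b1 = np.zeros(hidden)
w2 = rng.normal(size=hidden)

def score(i, selected):
    """Nonlinear score for candidate i given the already-selected set:
    relevance features concatenated with a scalar summarizing how
    dissimilar i is to the selected results."""
    # "min" aggregation is one common choice; an assumption in this sketch.
    div = D[i, selected].min() if selected else 1.0
    z = np.concatenate([X[i], [div]])
    h = np.tanh(z @ W1 + b1)   # hidden layer (nonlinear formulation)
    return float(h @ w2)       # linear output layer

def diversify(k):
    """Greedy sequential selection: at each step pick the highest-scoring
    remaining candidate, conditioning scores on what was already chosen."""
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best = max(remaining, key=lambda i: score(i, selected))
        selected.append(best)
        remaining.remove(best)
    return selected

print(diversify(3))
```

The key design point the citing paper highlights is that only the scoring function changes: the greedy inference loop is identical to linear R-LTR, so the network can be swapped in without altering how rankings are produced.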