Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022
DOI: 10.1145/3477495.3531723
BARS

Cited by 53 publications (16 citation statements)
References 77 publications
“…Since the dataset partitions and evaluation metrics of all compared baselines are consistent with our method, we directly present the results reported in them; this ensures that we present the optimal performance of the various methods under a fair comparison. The experimental settings of all baselines can be found at https://openbenchmark.github.io/BARS [38].…”
Section: Performance Comparison
confidence: 99%
“…To begin, we train a full model with a fixed field dimension of 16, as suggested by previous works [48,74,75]. This process allows us to generate an informative embedding table for each field.…”
Section: Field-level Dimension Optimization
confidence: 99%
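The per-field embedding setup quoted above (one table per field, fixed dimension 16) can be sketched as follows. This is a minimal illustration in plain Python; the field names and vocabulary sizes are made-up assumptions, not taken from the cited work.

```python
import random

def build_embedding_tables(field_vocab_sizes, dim=16, seed=0):
    """Create one embedding table per categorical field.

    Each table maps a field's vocabulary indices to dense vectors of a
    fixed dimension (16, matching the fixed field dimension quoted above).
    """
    rng = random.Random(seed)
    tables = {}
    for field, vocab_size in field_vocab_sizes.items():
        # One row per vocabulary entry, initialized with small Gaussian noise.
        tables[field] = [
            [rng.gauss(0.0, 0.01) for _ in range(dim)]
            for _ in range(vocab_size)
        ]
    return tables

# Hypothetical fields with illustrative vocabulary sizes.
tables = build_embedding_tables({"user_id": 1000, "item_id": 500, "category": 20})
```

In a real CTR model these tables would be trainable parameters (e.g. lookup layers), and the trained rows are what the cited work inspects when optimizing per-field dimensions.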
“…Implementation details. We implement all models with Pytorch [40] and refer to existing works [8,74,75]. We use Adam [27] optimizer to optimize all models, and the default learning rate is 0.001.…”
Section: Experimental Analysis 5.1 Experiments Setup
confidence: 99%