Proceedings of the 13th ACM Conference on Recommender Systems 2019
DOI: 10.1145/3298689.3347002

Addressing delayed feedback for continuous training with neural networks in CTR prediction

Abstract: One of the challenges in display advertising is that the distribution of features and click-through rate (CTR) can exhibit large shifts over time due to seasonality, changes to ad campaigns and other factors. The predominant strategy to keep up with these shifts is to train predictive models continuously, on fresh data, in order to prevent them from becoming stale. However, in many ad systems positive labels are only observed after a possibly long and random delay. These delayed labels pose a challenge to data…
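The delay problem described in the abstract is typically handled by ingesting every fresh example immediately, before its true label can be known: each example is first trained on as a negative and replayed as a positive once the delayed label arrives. The snippet below is a minimal, hypothetical sketch of that ingestion loop under this reading; the function and variable names (`stream_training_examples`, `training_buffer`) are illustrative and not taken from the paper.

```python
import random
from collections import deque

def stream_training_examples(events, training_buffer):
    """Ingest a stream of (features, delayed_label) pairs for continuous training.

    Every example is added right away with a negative label (a possibly
    "fake" negative); if its delayed positive label arrives, the same
    features are re-added with a positive label. In a real system the
    replay happens later, when the label actually arrives; this sketch
    compresses both steps into one loop for brevity.
    """
    for features, delayed_label in events:
        training_buffer.append((features, 0))      # immediate (fake) negative
        if delayed_label == 1:
            training_buffer.append((features, 1))   # delayed positive replay

# Toy usage with synthetic events
buffer = deque(maxlen=100_000)
fake_events = [([random.random() for _ in range(4)], int(random.random() < 0.1))
               for _ in range(10)]
stream_training_examples(fake_events, buffer)
```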

Cited by 44 publications (18 citation statements)
References 20 publications

“…Second, in most cases, FNC and FNW perform better than the vanilla baseline. Specifically, FNW outperforms the baseline in both PR-AUC and NLL, which is consistent with the results reported in Ktena et al. (2019). Third, existing methods show little superior performance in terms of AUC, while our method outperforms the best baseline by 0.26% and 0.44% AUC on the public and anonymous dataset, respectively.…”
Section: Standard Streaming Experiments: RQ1 (supporting)
confidence: 88%
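For readers unfamiliar with the baseline named above: FNW (fake negative weighted) trains on every example as an immediate negative, replays delayed positives, and corrects the resulting label bias with importance weights derived from the model's own prediction, with gradients stopped through the weights. The snippet below is a hedged sketch of that weighting as commonly derived for this scheme, not the authors' code; the function name `fnw_loss` and the exact weight expressions are my reconstruction and should be checked against the original paper.

```python
import torch

def fnw_loss(logits, labels):
    """Importance-weighted log loss in the fake-negative-weighted style (sketch).

    labels == 0: examples ingested immediately (possibly fake negatives).
    labels == 1: examples replayed once their delayed positive label arrived.
    The weights use the model's own prediction p as a constant (detached),
    following the usual importance-weighting derivation for this scheme.
    """
    p = torch.sigmoid(logits).detach()           # model prediction, no gradient
    pos_weight = 1.0 + p                          # weight for replayed positives
    neg_weight = (1.0 - p) * (1.0 + p)            # weight for (fake) negatives
    weights = torch.where(labels > 0.5, pos_weight, neg_weight)
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")
    return (weights * bce).mean()

# Toy usage
logits = torch.randn(8, requires_grad=True)
labels = torch.randint(0, 2, (8,)).float()
loss = fnw_loss(logits, labels)
loss.backward()
```

The weights come from the ratio between the true label distribution and the biased one induced by duplicating positives, under the assumption that the feature marginal is unchanged by the duplication.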
“…In order to capture the dynamic change of user needs, commercial systems often update learned models with up-to-date data within a short time, i.e., in an online training manner (Jugovac, Jannach, and Karimi 2018; Guo et al. 2019; Ktena et al. 2019). This further complicates CVR prediction since conversions usually do not happen immediately after a user click.…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, the authors in [63] have centered their work on delayed positive feedback in streaming media to study the effect of two factors, trend and seasonality, in online advertising. In live streams, the prediction models must also deal with the cold-start issue.…”
Section: Stream-Based Framework (mentioning)
confidence: 99%
“…Supervised learning procedures, such as collaborative filtering (Schafer et al 2007) and content-based filtering (Blanda 2016) that form the cornerstone of RSs, are typically used to model the probability that each piece of content will immediately engage a given user. However, those methods are inadequate when optimizing for delayed user feedback (Joulani, Gyorgy, and Szepesvári 2013;Ktena et al 2019). In particular, the value of a recommendation can become evident in later interactions with a user, rather than through immediate engagement.…”
Section: Introduction (mentioning)
confidence: 99%