Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2020)
DOI: 10.1145/3394486.3403384
Jointly Learning to Recommend and Advertise

Cited by 65 publications (32 citation statements); references 21 publications.
“…There are some studies targeting recommendation and advertising simultaneously in e-commerce environments [73,123,129]. Pei et al. [73] note that when deploying an RS into a real-world platform such as an e-commerce scenario, the expectation is to improve the profit of the system.…”
Section: Model-free Deep Reinforcement Learning Based Methods (mentioning)
confidence: 99%
“…A new metric, Gross Merchandise Volume (GMV), is proposed to measure the profitability of the RS, providing a new view on evaluating RS in advertising. Different from GMV, Zhao et al. [129] separate recommendation and advertising into two different tasks and propose the Rec/Ads Mixed display (RAM) framework. RAM designs two agents: a recommendation agent and an advertising agent, where each agent employs a CDQN to conduct the corresponding task.…”
Section: Model-free Deep Reinforcement Learning Based Methods (mentioning)
confidence: 99%
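The two-agent division of labor described above can be illustrated with a toy sketch. This is not the RAM implementation (which trains a cascaded DQN per agent); the `Agent` class, action sets, and state dictionary here are all hypothetical stand-ins showing only how a recommendation agent and an advertising agent could jointly assemble one feed slot:

```python
import random

class Agent:
    """Hypothetical stand-in for an RL agent.

    In RAM each agent would be a trained CDQN; here act() just
    samples an action so the control flow is runnable.
    """
    def __init__(self, actions, seed):
        self.actions = actions
        self.rng = random.Random(seed)

    def act(self, state):
        # A trained agent would return argmax_a Q(state, a) instead.
        return self.rng.choice(self.actions)

# Recommendation agent picks an item; advertising agent picks an ad
# (or None, meaning "insert no ad at this position").
rec_agent = Agent(actions=["item_a", "item_b", "item_c"], seed=0)
ad_agent = Agent(actions=[None, "ad_x", "ad_y"], seed=1)

state = {"history": []}          # hypothetical shared state
rec = rec_agent.act(state)       # recommendation task
ad = ad_agent.act(state)         # advertising task
feed = [rec] + ([ad] if ad is not None else [])
print(feed)
```

The point of the sketch is the separation: each agent optimizes its own task over the same state, and their outputs are interleaved into a single mixed rec/ads feed.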
“…RL-based methods model the ads allocation problem as an MDP and solve it with different RL techniques. Zhao et al. (2020) propose a two-level RL framework to jointly optimize the recommending and advertising strategies. Zhao et al. (2021) propose a DQN architecture to determine the optimal ad and ad position jointly.…”
Section: Related Work (mentioning)
confidence: 99%
“…Early dynamic slot strategies use classic algorithms (e.g., Bellman-Ford, a unified rank score) to allocate ad slots. Since the feed is presented to the user as a sequence, recent dynamic ads allocation strategies usually model the problem as a Markov Decision Process (Sutton and Barto 1998) and solve it using reinforcement learning (RL) (Zhang et al. 2018; Feng et al. 2018; Zhao et al. 2020).…”
Section: Introduction (mentioning)
confidence: 99%
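The MDP framing mentioned above can be made concrete with a minimal sketch. The state, action, and reward definitions below are illustrative assumptions, not the formulation of any specific cited paper: the state holds the feed shown so far plus a user-context vector, the action is whether the next slot carries an ad, and the reward mixes engagement with ad revenue:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FeedState:
    """Hypothetical MDP state: items shown so far + user context."""
    shown: List[str]
    user_ctx: List[float]

def step(state: FeedState, insert_ad: bool,
         rec_item: str, ad_item: str,
         ad_revenue: float, rec_reward: float) -> Tuple[FeedState, float]:
    """One transition: the binary action decides whether the next
    feed slot shows an ad or an organic recommendation."""
    item = ad_item if insert_ad else rec_item
    reward = ad_revenue if insert_ad else rec_reward
    next_state = FeedState(shown=state.shown + [item],
                           user_ctx=state.user_ctx)
    return next_state, reward

s0 = FeedState(shown=[], user_ctx=[0.1, 0.2])
s1, r = step(s0, insert_ad=False, rec_item="item_1", ad_item="ad_1",
             ad_revenue=0.5, rec_reward=1.0)
print(s1.shown, r)
```

Because the feed unfolds slot by slot, the sequential structure is exactly what makes RL applicable: each allocation decision changes the state that the next decision sees.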
“…To date, most methods use only vanilla experience replay (ER), which uniformly samples experiences from the replay buffer. Among them, Zhao et al. [39] apply DQN to online recommendation and an RNN to generate state embeddings; Chen et al. [4] point out that DQN receives unstable rewards in dynamic environments such as online recommendation, which may harm the agent; Chen et al. [3] find that traditional methods like DQN become intractable when the state becomes higher-dimensional; DPG addresses this intractability by mapping the high-dimensional discrete state into a low-dimensional continuous state [5,36].…”
Section: Related Work (mentioning)
confidence: 99%
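The "vanilla ER" mentioned in the quote above is a simple mechanism, and a minimal sketch makes the uniform-sampling property explicit. This is a generic textbook replay buffer, not code from any cited paper:

```python
import random
from collections import deque

class ReplayBuffer:
    """Vanilla experience replay: store transitions in a bounded
    buffer and sample minibatches uniformly at random."""
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # Uniform sampling: every stored transition is equally likely,
        # regardless of its TD error or recency.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.push(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)
batch = buf.sample(8)
print(len(batch))  # 8
```

Uniform sampling is what distinguishes vanilla ER from prioritized variants, which bias sampling toward transitions with high learning signal; the quote's observation is that most RL-based recommenders still use only the uniform form.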