2021
DOI: 10.1007/978-3-030-75765-6_41
Transformer-Based Multi-task Learning for Queuing Time Aware Next POI Recommendation

Cited by 17 publications (16 citation statements)
References 16 publications
“…This work is an extension of our previous conference paper (Halder et al 2021). Here we extend it by modelling POI-description-based personalised user interests and their impact on queuing time prediction and next POI recommendation simultaneously.…”
Section: Introduction
confidence: 66%
“…None of these studies used multi-tasking in POI recommendations. Halder et al (Halder et al 2021) proposed a transformer-based multi-task learning model for next top-k POI recommendation and queuing time prediction. This model cannot capture user interests appropriately and cannot solve the new-POI cold-start problem.…”
Section: Transformer and Multi-task Learning
confidence: 99%