2022
DOI: 10.48550/arxiv.2205.03297
Preprint

Implicit semantic-based personalized micro-videos recommendation

Abstract: With the rapid development of the mobile Internet and big data, a huge amount of data is generated on the network, but the data that users are really interested in is only a very small portion. To extract the information that users are interested in from this huge amount of data, the information overload problem needs to be solved. In the era of the mobile Internet, the user's characteristics and other information should be combined with the massive amount of data to quickly and accurately recommend content to the user, as far as…
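The abstract stops short of the proposed model, so as a generic illustration of personalized recommendation over implicit feedback (clicks/watches) it may help to see the basic scoring idea in code. This is not the paper's method; the class, dimensions, and BPR-style training objective are assumptions for demonstration only.

```python
# Generic illustration (NOT the paper's proposed model): user and micro-video
# embeddings scored by dot product, trained with a BPR-style pairwise loss on
# implicit feedback. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitRecommender(nn.Module):
    def __init__(self, num_users: int, num_items: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def score(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # Higher score = more likely the user is interested in the micro-video.
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)

    def bpr_loss(self, users, pos_items, neg_items) -> torch.Tensor:
        # Watched (positive) items should score higher than unwatched (negative) ones.
        diff = self.score(users, pos_items) - self.score(users, neg_items)
        return -F.logsigmoid(diff).mean()


if __name__ == "__main__":
    model = ImplicitRecommender(num_users=1000, num_items=5000)
    users = torch.randint(0, 1000, (32,))
    pos, neg = torch.randint(0, 5000, (32,)), torch.randint(0, 5000, (32,))
    print(model.bpr_loss(users, pos, neg).item())
```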

Cited by 1 publication (3 citation statements)
References 30 publications
“…[32], [7]: Coarse-grained Attention, None; [21], [26]: Fine-grained Attention, None; [24]: Fine-grained Attention, CL; [47], [51]: User-item Graph, None; [54]: User-item Graph, CL; [29]: Knowledge Graph + Fine-grained Attention, None…”
Section: Video
Mentioning confidence: 99%
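The quoted table groups related methods by their fusion mechanism (coarse-grained vs. fine-grained attention, graph-based modeling, with or without CL). As a rough sketch of the coarse/fine distinction only, and not any cited paper's actual code, the following contrasts per-modality attention with per-frame/patch attention; all module names, shapes, and dimensions are assumptions.

```python
# Illustrative sketch: coarse-grained (per-modality) vs. fine-grained (per-frame/patch)
# attention for multimodal micro-video recommendation. Shapes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseGrainedFusion(nn.Module):
    """One attention weight per modality, conditioned on the user embedding."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, user: torch.Tensor, modalities: torch.Tensor) -> torch.Tensor:
        # user: (B, D); modalities: (B, M, D), e.g. visual/acoustic/textual vectors
        u = user.unsqueeze(1).expand_as(modalities)            # (B, M, D)
        logits = self.score(torch.cat([u, modalities], -1))    # (B, M, 1)
        weights = F.softmax(logits, dim=1)                     # one weight per modality
        return (weights * modalities).sum(dim=1)               # (B, D) fused item vector


class FineGrainedFusion(nn.Module):
    """Attention over frame- or patch-level features inside a single modality."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)

    def forward(self, user: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # user: (B, D); patches: (B, P, D), frame or image-patch features
        q = self.query(user).unsqueeze(1)                      # (B, 1, D)
        logits = (q * patches).sum(-1, keepdim=True)           # (B, P, 1) dot-product scores
        weights = F.softmax(logits, dim=1)                     # one weight per patch/frame
        return (weights * patches).sum(dim=1)                  # (B, D) modality vector


if __name__ == "__main__":
    B, M, P, D = 4, 3, 16, 64
    user = torch.randn(B, D)
    coarse = CoarseGrainedFusion(D)(user, torch.randn(B, M, D))
    fine = FineGrainedFusion(D)(user, torch.randn(B, P, D))
    print(coarse.shape, fine.shape)
```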
“…VECF [6] performs image segmentation to integrate the image features of each patch with the other modalities. UVCAN [26] segments video screenshots into patches like VECF and fuses the image patches with ID information and text information, respectively, through an attention mechanism. MM-Rec [52] first extracts regions of interest from news images with the object detection algorithm Mask R-CNN, and then fuses the ROIs with the news content using co-attention.…”
Section: 2.2
Mentioning confidence: 99%
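The statement above describes region/patch-level fusion via (co-)attention. A minimal, hypothetical sketch of co-attention between detected image regions (e.g. Mask R-CNN ROIs) and text tokens is shown below; it is not the cited papers' code, and all shapes, names, and the pooling step are assumptions.

```python
# Hypothetical co-attention sketch between image-region features and text-token
# features, in the spirit of the MM-Rec description above. Shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoAttention(nn.Module):
    """Symmetric co-attention: regions attend to tokens and tokens attend to regions."""

    def __init__(self, dim: int):
        super().__init__()
        self.affinity = nn.Linear(dim, dim, bias=False)

    def forward(self, regions: torch.Tensor, tokens: torch.Tensor):
        # regions: (B, R, D) ROI features; tokens: (B, T, D) text-token features
        sim = self.affinity(regions) @ tokens.transpose(1, 2)        # (B, R, T) affinity
        region_ctx = F.softmax(sim, dim=2) @ tokens                  # regions attend to tokens
        token_ctx = F.softmax(sim, dim=1).transpose(1, 2) @ regions  # tokens attend to regions
        return region_ctx, token_ctx


if __name__ == "__main__":
    B, R, T, D = 2, 6, 20, 64
    co_attn = CoAttention(D)
    region_ctx, token_ctx = co_attn(torch.randn(B, R, D), torch.randn(B, T, D))
    # Pool both attended views into a single multimodal representation.
    fused = torch.cat([region_ctx.mean(1), token_ctx.mean(1)], dim=-1)  # (B, 2D)
    print(fused.shape)
```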