Section: Transforming the Content

“…Most recent works in this domain include [151][152][153]. In [151] a user-item interaction graph is formulated and TransGRec is proposed; the latter is an inductive graph-based transfer learning framework for personalised video highlight recommendation. [152] explores the cross-category video highlight detection problem by learning two types of knowledge about highlight moments and applying them to the target video category, while [153] utilises multi-modal information, including content-agnostic audio-visual synchrony representations and mel-frequency cepstral coefficients, to capture other intrinsic properties of audio.…”
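To make the user-item interaction structure that [151] builds on concrete, the sketch below assembles a toy bipartite user-video graph with networkx. All user and video IDs and edge weights are hypothetical placeholders, and networkx is only a stand-in here; the excerpt does not say which graph machinery TransGRec itself uses.

```python
# Minimal sketch of a bipartite user-item interaction graph of the kind
# [151] formulates; the IDs and weights below are hypothetical.
import networkx as nx

G = nx.Graph()
# Users on one side of the bipartition, videos on the other.
G.add_nodes_from(["u1", "u2", "u3"], bipartite="user")
G.add_nodes_from(["v1", "v2"], bipartite="video")

# An edge records that a user interacted with (e.g. watched or
# highlighted) a video; a weight can encode interaction strength.
G.add_edge("u1", "v1", weight=1.0)
G.add_edge("u1", "v2", weight=0.5)
G.add_edge("u2", "v1", weight=1.0)
G.add_edge("u3", "v2", weight=2.0)

# Neighbourhoods like this are what a graph-based recommender would
# aggregate over to produce user and video embeddings.
print(sorted(G.neighbors("u1")))  # ['v1', 'v2']
```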
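The audio side of [153] relies in part on mel-frequency cepstral coefficients. The snippet below is a minimal sketch of standard MFCC extraction using librosa; the file name and parameter values are illustrative assumptions, and librosa is our choice for the sketch, not necessarily the toolkit used in [153].

```python
# Standard MFCC extraction with librosa; the path and parameters below
# are illustrative, not those of [153].
import librosa

# Load the audio track of a video (librosa resamples to 22.05 kHz by default).
y, sr = librosa.load("example_video_audio.wav")

# 13 coefficients per frame is a common choice for audio features.
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(mfccs.shape)  # (13, n_frames): one 13-dim feature vector per frame
```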