2023
DOI: 10.1109/access.2023.3245982
CLEFT: Contextualised Unified Learning of User Engagement in Video Lectures With Feedback

Abstract: Predicting contextualised engagement in videos is a long-standing problem that has been popularly attempted by exploiting the number of views or likes using different computational methods. The recent decade has seen a boom in online learning resources, and during the pandemic, there has been an exponential rise of online teaching videos without much quality control. As a result, we are faced with two key challenges. First, how to decide which lecture videos are engaging to intrigue the listener and increase p…

Cited by 1 publication (2 citation statements)
References 43 publications
“…Researchers also conducted a video engagement prediction task using machine learning models based on this data set. Roy et al. (2023) proposed the CLEFT framework for predicting engagement, which provides interpretive feedback and user engagement ratings for the videos. The unified framework uses various pretrained models as components, and each submodel is constructed using different modal features, such as language, text and video emotion features.…”
Section: Context-Agnostic Engagement
Confidence: 99%
“…No one has considered the dynamic features related to the videos yet. In addition, the latest research (Roy et al., 2023) also suggests that multimodal features are another direction that can be explored for video engagement prediction.…”
Section: Related Work
Confidence: 99%