2016
DOI: 10.18608/jla.2016.32.13

Designing An Automated Assessment of Public Speaking Skills Using Multimodal Cues

Abstract: Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of 17 speakers with 4 speaking tasks was collected using audio, video, and 3D motion capturing devices. A scoring model based on basic features in the speech content, sp…
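The abstract's core idea, regressing multimodal features against human rubric scores, can be sketched minimally. Everything below is a hypothetical illustration under assumed feature names and synthetic data, not the authors' actual pipeline or feature set.

```python
# Illustrative sketch (NOT the paper's actual model): fit a simple
# linear scoring model mapping multimodal features to human ratings.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

# Hypothetical per-performance features: speaking rate, pitch variation,
# head-movement energy, hand-gesture rate (one row per performance).
X = rng.normal(size=(17 * 4, 4))          # 17 speakers x 4 tasks
true_w = np.array([0.8, 0.5, -0.3, 0.2])  # unknown in practice
y = X @ true_w + rng.normal(scale=0.1, size=X.shape[0])  # "human" scores

# Fit least-squares weights (with intercept) and report fit quality (R^2).
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = lstsq(Xb, y, rcond=None)
pred = Xb @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

With only 68 observations, as in the corpus described above, a low-dimensional linear model of this kind is about as much capacity as the data can support; the paper's actual model form may differ.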


Cited by 10 publications (13 citation statements)
References 27 publications
“…Apart from predicting affective states and performance and explaining engagement, other studies have employed multimodal data for other research objectives as well, including modeling dialogue (Grafsgaard, Lester, & Boyer, 2015; Worsley, 2018), idea creation (Furuichi & Worsley, 2018), or motivational intentions (Yu et al., 2018), assessing presentation skills (Chen et al., 2016; Ochoa et al., 2018), and predicting collaborative coordination/synchrony between collaborating peers (e.g., Grafsgaard, Duran, Randall, Tao, & D'Mello, 2018; Schneider & Blikstein, 2015; Stewart, Keirn, & D'Mello, 2018; Worsley, 2014). Furthermore, substantial work has been done in the area of providing feedback using data in one or more modalities (Pardo, Poquet, Martínez-Maldonado, & Dawson, 2017).…”
Section: Related Work: Utilizing Multimodal Data to Predict Learning
confidence: 99%
“…For example, to predict students' performance in terms of recall, quality, correctness, or self-assessment, researchers have used different combinations of brain, behavioral, or body signals, as well as audio cues and learning artifacts (Beardsley, Hernández-Leo, & Ramirez-Melendez, 2018; Chen et al., 2016; Di Mitri et al., 2017; Junokas, Lindgren, Kang, & Morphew, 2018; Spikol, Ruffaldi, Dabisias, & Cukurova, 2018). Combinations of different data streams yielded accurate predictions for a range of performance measures.…”
Section: Practitioner Notes
confidence: 99%
“…As can be inferred from the above examples, the data modalities used in individual and collaborative MMLA range greatly, with log data, clickstreams, audio, video, dialogues, facial expressions, gestures, posture, motion, gaze, and biological data being the most common (Beardsley et al., 2018; Chen et al., 2016; Junokas et al., 2018; Liu & Stamper, 2017; Mattingly et al., 2019; Smith et al., 2016; Spikol et al., 2018). From these data streams, a wide range of higher-level features can be inferred, including affect, attention, cognitive processing, stress, and fatigue.…”
Section: Practitioner Notes
confidence: 99%
“…Considering assessment duties, diverse issues should also be taken into account, such as individual accountability, time, validity, and reliability. Concerning speaking skills, they are one of the complex assessment targets that calls for novel solutions, such as the scoring model based on multimodal data (e.g., speech delivery and basic features of speech content, hand, body, and head movements) that significantly predicts human rating (Chen et al., 2016).…”
Section: Applications
confidence: 99%