This paper presents the MUST-VIS system for the MediaMixer/VideoLectures.NET Temporal Segmentation and Annotation Grand Challenge. The system allows users to visualize a lecture as a series of segments represented by keyword clouds, with relations to other similar lectures and segments. Segmentation is performed using a multi-factor algorithm that takes advantage of the audio (through automatic speech recognition and word-based segmentation) and the video (through the detection of actions such as writing on the blackboard). The similarity across segments and lectures is computed using a content-based recommendation algorithm. Overall, the graph-based representation of segment similarity appears to be a promising and cost-effective approach to navigating lecture databases.
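To make the content-based similarity step concrete, the sketch below compares segment transcripts using TF-IDF keyword vectors and cosine similarity, then links sufficiently similar segments into a similarity graph. This is a minimal illustration under assumed inputs (the transcripts, the threshold, and the scikit-learn library choice are all illustrative), not the MUST-VIS implementation itself.

```python
# Illustrative sketch (not the authors' implementation): content-based
# similarity between lecture segments via TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical ASR transcripts, one string per lecture segment.
segments = [
    "gradient descent converges under convexity assumptions",
    "stochastic gradient descent uses mini-batches for scalability",
    "the blackboard example derives the update rule step by step",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(segments)   # segment-by-term matrix
sim = cosine_similarity(tfidf)               # segment-by-segment similarity

# Build a simple similarity graph: connect segment pairs above a threshold.
THRESHOLD = 0.1  # illustrative cutoff
edges = [
    (i, j, float(sim[i, j]))
    for i in range(len(segments))
    for j in range(i + 1, len(segments))
    if sim[i, j] > THRESHOLD
]
print(edges)
```

The same pairwise comparison applies across lectures as well as within them; keyword clouds for each node can be read off the highest-weighted TF-IDF terms per segment.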
Chatbots have long been advocated for computer-assisted language learning systems to support learners with conversational practice. A particular challenge in such systems is explaining mistakes stemming from ambiguous grammatical constructs. Misplaced modifiers, for instance, do not make sentences ungrammatical, but introduce ambiguity through the misplacement of an adverb or prepositional phrase. In certain cases, the ambiguity gives rise to humor, which can serve to illustrate the mistake itself. We conducted an online experiment with 400 native English speakers to explore the use of a chatbot to harness such humor. In an interaction resembling an advanced grammar exercise, the chatbot presented participants with a phrase containing a misplaced modifier, explained the ambiguity in the phrase, acknowledged (or ignored) the humor that the ambiguity gave rise to, and suggested a correction. Participants then completed a questionnaire, rating the chatbot with respect to ten traits. A quantitative analysis showed a significant increase in how participants rated the chatbot's personality, humor, and friendliness when it acknowledged the humor arising from the misplaced modifier. This effect was observed whether the acknowledgment was conveyed using verbal, nonverbal (emoji), or mixed cues.
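The abstract does not specify the statistical test behind the reported increase, so the following is only a hypothetical sketch of how one condition-versus-condition comparison of trait ratings might be run, using an independent-samples Mann-Whitney U test on assumed Likert-scale data.

```python
# Hypothetical sketch: comparing participants' humor ratings between an
# "acknowledged" and an "ignored" condition. The data and the choice of
# test are illustrative, not taken from the paper.
from scipy.stats import mannwhitneyu

# Assumed 1-7 Likert ratings from two groups of participants.
acknowledged = [6, 5, 7, 6, 5, 6, 7, 4, 6, 5]
ignored = [4, 3, 5, 4, 4, 3, 5, 4, 3, 4]

stat, p = mannwhitneyu(acknowledged, ignored, alternative="greater")
# A small p-value would indicate higher ratings when the humor is acknowledged.
print(f"U = {stat:.1f}, p = {p:.4f}")
```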
Peer code review has proven to be a valuable tool in software engineering. However, integrating code reviews into educational contexts is particularly challenging due to the complexity of both the process and popular code review tools. We propose to address this challenge by designing a code review application (CRA) aimed at teaching the code review process directly within existing online learning platforms. Using the CRA, instructors can scaffold online lessons that introduce the code review process to students through code snippets, following a format resembling computational notebooks. We refer to this online lesson format as the code review notebook format. Through a case study comprising an online lesson on code quality standards completed by 23 university students, we evaluated the usability of the CRA and the code review notebook format, obtaining positive results for both. These results are a first step toward integrating code review notebooks into software engineering education.
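As a rough illustration of what a notebook-style code review lesson could look like as data, the sketch below models a lesson as an ordered list of cells mixing instructions, a code snippet under review, and a review prompt. The field names and schema are hypothetical, not the CRA's actual format.

```python
# Hypothetical "code review notebook" lesson structure; the schema is
# illustrative and does not reflect the CRA's internal representation.
lesson = {
    "title": "Code quality standards",
    "cells": [
        {"type": "markdown",
         "source": "Review the snippet below for naming issues."},
        {"type": "code_snippet",
         "language": "python",
         "source": "def f(x):\n    return x * 9 / 5 + 32"},
        {"type": "review_prompt",
         "question": "Suggest clearer function and parameter names.",
         "anchor_line": 1},  # line of the snippet the comment attaches to
    ],
}

# Render the lesson's cell sequence, as a notebook runner might.
for cell in lesson["cells"]:
    print(cell["type"], "-", cell.get("question", cell["source"].splitlines()[0]))
```

Ordering cells this way mirrors computational notebooks: students read context, inspect the snippet, and attach review comments at specific lines, which is the workflow the lesson format is meant to teach.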