Chatbots are becoming ubiquitous in many fields, such as medicine, the product and service industry, and education. Chatbots are computer programs that conduct auditory or textual conversations. A growing body of evidence suggests that these programs have the potential to change the way students learn and search for information. Especially in large-scale learning scenarios with more than 100 students per lecturer, chatbots can address the problem of individual student support. However, until now there has been no systematic, structured overview of their use in education. The aim of this paper is therefore to conduct a systematic literature review based on a multi-perspective framework, from which we derived initial research questions, synthesized past research, and highlighted future research directions. We reviewed the titles and abstracts of 1,405 articles drawn from the management, education, information systems, and psychology literature before examining and individually coding a relevant subset of 80 articles. The results show that chatbots are only at the very beginning of entering education. A few studies suggest the potential of chatbots for improving learning processes and outcomes. Nevertheless, past research has revealed that the effectiveness of chatbots in education is complex and depends on a variety of factors. With our literature review, we make two principal contributions: first, we structure and synthesize past research using an input-process-output framework, and second, we use the framework to highlight research gaps that can guide future research in this area.
Enrollment in online courses has sharply increased in higher education. Although online education can be scaled to large audiences, the lack of interaction between educators and learners is difficult to replace and remains a primary challenge in the field. Conversational agents may alleviate this problem by engaging in natural interaction and by scaffolding learners' understanding in much the same way educators do. However, whether this approach can also be used to enrich online video lectures has largely remained unknown. We developed Sara, a conversational agent that appears during an online video lecture. She provides scaffolds by voice and text when needed and includes a voice-based input mode. An evaluation with 182 learners in a 2 × 2 lab experiment demonstrated that Sara, compared to more traditional conversational agents, significantly improved learning in a programming task. This study highlights the importance of including scaffolding and voice-based conversational agents in online videos to improve meaningful learning.
Recent advances in Natural Language Processing (NLP) offer the opportunity to design new forms of human-computer interaction with conversational interfaces. We hypothesize that these interfaces can interactively engage students and thereby increase the response quality of course evaluations in education compared to the common standard of web surveys. Past research indicates that web surveys come with disadvantages such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. To test whether conversational interfaces have a positive impact on the level of enjoyment and the response quality, we design an NLP-based conversational agent, deploy it in a field experiment with 127 students in our lecture, and compare it with a web survey as a baseline. Our findings indicate that using conversational agents for evaluations results in higher response quality and a higher level of enjoyment, and that they are therefore a promising approach to increase the effectiveness of surveys in general.
Information technology capabilities are growing at an impressive pace and increasingly overstrain the cognitive abilities of users. User assistance systems such as online manuals try to help users handle these systems. However, there is strong evidence that traditional user assistance systems are not as effective as intended. With the rise of smart personal assistants, such as Amazon's Alexa, user assistance systems are becoming more sophisticated by offering a higher degree of interaction and intelligence. This study proposes a process model to develop Smart Personal Assistants. Using a design science research approach, we first gather requirements from Smart Personal Assistant designers and from theory, and then evaluate the process model by developing an Amazon Alexa Skill for a smart home system. This paper contributes to the existing user assistance literature by offering a new process model for designing Smart Personal Assistants for intelligent systems.