2016
DOI: 10.1002/ets2.12105

Bootstrapping Development of a Cloud‐Based Spoken Dialog System in the Educational Domain From Scratch Using Crowdsourced Data

Abstract: We propose a crowdsourcing‐based framework to iteratively and rapidly bootstrap a dialog system from scratch for a new domain. We leverage the open‐source modular HALEF dialog system to deploy dialog applications. We illustrate the usefulness of this framework using four different prototype dialog items with applications in the educational domain and present initial results and insights from this endeavor.
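
This report page gives no implementation detail, but the loop the abstract describes (deploy a prototype item, collect crowdsourced interactions, refine models, redeploy) can be sketched roughly as below. All helper names here (deploy_item, collect_calls, retrain_models) are hypothetical placeholders, not actual HALEF interfaces.

# A minimal sketch of the iterative bootstrapping loop the abstract
# describes. Every function below is a hypothetical stub, not a HALEF API.

def deploy_item(item, models):
    """Stub: push a prototype dialog item (plus current models) to the system."""
    return {"item": item, "models": models}

def collect_calls(deployment, n_calls):
    """Stub: gather crowdsourced calls against the deployed item."""
    return [f"call_{i}" for i in range(n_calls)]

def retrain_models(calls, seed):
    """Stub: refine speech/language models on the newly collected data."""
    return {"trained_on": len(calls), "previous": seed}

def bootstrap_dialog_item(item, rounds=3, calls_per_round=500):
    """Iterate: deploy, collect crowdsourced data, retrain, redeploy."""
    models = None
    for _ in range(rounds):
        deployment = deploy_item(item, models)
        calls = collect_calls(deployment, n_calls=calls_per_round)
        models = retrain_models(calls, seed=models)
    return models

if __name__ == "__main__":
    print(bootstrap_dialog_item("interview_prototype"))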

Cited by 13 publications (19 citation statements, 2017-2024) · References 12 publications
“…To test the basic functionality of the presented multimodal dialog system, we are presently conducting a user study deploying multiple language assessment prototypes in a crowdsourcing environment. As of September 2016, more than 20,000 calls have been processed by the system (see a preliminary analysis in Ramanarayanan et al., 2016).…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…There is an increasing demand for dialog-based learning and assessment solutions at scale, given the rapidly growing language learning and online education marketplace. Dialog system technologies are one solution capable of addressing and automating this demand at scale (Ramanarayanan et al, 2016). However, such conversational technologies need to be able to provide useful and actionable feedback to users in order for them to be widely adopted.…”
Section: Automated Scoring of Text Dialog · Citation type: mentioning · Confidence: 99%
“…We used Amazon Mechanical Turk for our crowdsourcing data collection experiments. Crowdsourcing has been used in the past for the assessment of dialog systems as well as for collection of dialog interactions (see, for instance, McGraw et al., 2010; Rayner et al., 2011; Jurčíček et al., 2011; Ramanarayanan et al., 2016). In addition to interacting with the text chatbot interface to complete the conversational task, workers were requested to fill out a 2-3 minute survey regarding different aspects of the interaction, such as their overall experience, how engaged they felt while interacting with the system, how well the system understood them, and basic demographic information.…”
Section: Crowdsourcing Data Collection · Citation type: mentioning · Confidence: 99%
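
The data-collection setup this excerpt describes (posting dialog-interaction tasks to Amazon Mechanical Turk workers, followed by a short survey) might look roughly like the boto3 sketch below. The task title, reward, worker count, and external URL are illustrative assumptions, not values reported by the authors.

# Minimal sketch of posting a dialog-interaction HIT on Amazon Mechanical
# Turk via boto3. All concrete values (title, reward, URL, counts) are
# illustrative assumptions.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# ExternalQuestion pointing the worker at a hosted dialog task plus the
# short post-interaction survey described in the excerpt above.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/dialog-task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Chat with an automated system (plus a 2-3 minute survey)",
    Description="Interact with a conversational system, then rate the experience.",
    Keywords="dialog, speech, survey",
    Reward="1.00",                       # USD per assignment; illustrative
    MaxAssignments=50,                   # number of workers; illustrative
    LifetimeInSeconds=7 * 24 * 3600,     # keep the HIT available for a week
    AssignmentDurationInSeconds=30 * 60, # time allotted per worker
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])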