[Context] Digital transformation affects an ever-growing share of business and private life. Incorporating user requirements into the development process is imperative for designing successful information systems (IS). Hence, requirements elicitation (RE) is increasingly performed by users who are novices at contributing requirements to IS development projects. [Objective] We need to develop RE systems that can assist a wide audience of users in communicating their needs and requirements. Prominent methods, such as elicitation interviews, are challenging to apply in this context because time and location constraints limit the potential audience. [Research Method] We present the prototypical self-elicitation system "LadderBot". A conversational agent (CA) enables end-users to articulate needs and requirements based on the laddering method. The CA mimics a human (expert) interviewer's ability to rephrase questions and provide assistance during the process. We propose an experimental study to evaluate LadderBot against an established questionnaire-based laddering approach. [Contribution] This work-in-progress introduces the chatbot LadderBot as a tool that guides novice users through requirements self-elicitation using the laddering technique. Furthermore, we present the design of an experimental study and outline the next steps and a vision for the future.
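To make the laddering idea concrete, the sketch below shows a minimal laddering-style elicitation loop: starting from an attribute, the agent repeatedly probes "why" and falls back to a rephrased question when the user stalls. It is an illustrative assumption only; the probe wording, ladder depth, and function names are not taken from LadderBot's actual implementation.

```python
# Minimal sketch of a laddering-style elicitation loop (illustrative only;
# not LadderBot's implementation). Probe wording, rephrasings, and the
# ladder depth are assumptions for demonstration.

LADDER_DEPTH = 4  # attribute -> consequence(s) -> underlying value

PROBES = [
    "Why is that important to you?",
    "What does that help you achieve?",
    "And why does that matter to you?",
]

REPHRASINGS = [
    "Let me put it differently: what benefit do you get from that?",
    "In other words, what would you miss if it were gone?",
]


def laddering_interview(get_user_reply):
    """Run one ladder: start from a feature/attribute and probe upwards.

    `get_user_reply(question) -> str` abstracts the chat front end.
    Returns the ladder as a list of (question, answer) pairs.
    """
    ladder = []
    question = "Which feature of the system matters most to you?"
    for level in range(LADDER_DEPTH):
        answer = get_user_reply(question).strip()
        if not answer:  # mimic an interviewer: rephrase instead of repeating
            question = REPHRASINGS[level % len(REPHRASINGS)]
            answer = get_user_reply(question).strip()
        ladder.append((question, answer))
        question = PROBES[min(level, len(PROBES) - 1)]
    return ladder


if __name__ == "__main__":
    print(laddering_interview(lambda q: input(q + "\n> ")))
```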
Figure 1: Cody used to extend qualitative coding to unseen data. (a) The user makes an annotation in a text document. (b) The user revises a rule suggestion to define the created code. (c) Cody searches the text for other occurrences (red) and trains a supervised machine learning model to extend manual coding to seen and unseen data (blue).
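The caption describes a two-part mechanism: a user-editable rule finds further occurrences of a code, while a supervised model generalizes the manual annotations. The following sketch illustrates that idea under stated assumptions (scikit-learn, a simplified keyword rule, toy data); it is not Cody's actual code.

```python
# Illustrative sketch of the rule-plus-classifier idea from the caption
# (not Cody's implementation). Assumes scikit-learn; the rule format is
# simplified to a regular-expression keyword match, and the data is toy data.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (a) + (b): manually annotated segments and a user-revised keyword rule
annotations = [("the app crashes on startup", 1), ("great color scheme", 0)]
rule = r"\bcrash(es|ed)?\b"  # rule defining the code "bug report"

corpus = ["it crashed again after the update", "love the new icons"]

# (c) red: rule-based search for further occurrences in the text
rule_hits = [doc for doc in corpus if re.search(rule, doc)]

# (c) blue: supervised model trained on manual coding, applied to all data
texts, labels = zip(*annotations)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
predictions = dict(zip(corpus, model.predict(corpus)))

print("rule matches:", rule_hits)
print("model codes:", predictions)
```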
User feedback on mobile app stores, product forums, and social media can contain product development insights. Much recent research has studied this feedback and developed methods to automatically extract requirement-related information. This feedback is generally considered the "voice of the users"; however, only a subset of software users provide online feedback. If the demographics of online feedback givers are not representative of the user base, there is a risk of developing software that does not meet the needs of all users. It is therefore important to understand who provides online feedback, to ensure the needs of underrepresented groups are not missed. In this work, we directly survey 1040 software users about their feedback habits, software use, and demographic information. Their responses indicate statistically significant differences in who gives feedback on each online channel with respect to traditional demographics (gender, age, etc.). We also identify key differences in what motivates software users to engage with each of the three channels. Our findings provide valuable context for requirements elicited from online feedback and show that considering information from all channels provides a more comprehensive view of user needs.
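The reported demographic differences amount to tests of association between feedback channel and demographic categories. The sketch below shows one common way to run such a test (a chi-square test of independence), assuming SciPy and made-up counts; the numbers are not the survey's data.

```python
# Illustrative chi-square test of independence between feedback channel and a
# demographic attribute, using made-up counts (not the survey's data).
from scipy.stats import chi2_contingency

# rows: app store, product forum, social media; columns: demographic groups
observed = [
    [120, 80, 10],
    [60, 90, 8],
    [150, 70, 12],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```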